
July 2022 Meeting Notes

Offline Leads meeting – 7/13/22

Attendees: Tom Junk, Chris Backhouse, Andrzej Szelc, Kyle Knoepfel, Tingjun Yang, Tracy Usher, Erica Snider, Katherine Lato

LArSoft Status:

  • Multi-threading work
    • Mike Wang is continuing work on a DUNE dataprep workflow used for SN processing. He had been investigating a difference in results from hit finding when run in single-threaded vs multi-threaded mode. The recob::Hit PR discussed at the July 12 LCM was one outcome of this work, and fixed (at least one of) the differences. From this point, he will continue to add workflow elements until the workflow is entirely thread-safe / multi-threaded.
  • Spack migration: Phase 2
    • Have started working on Phase 2 of the Spack migration, which will involve additional adaptations to Spack to support the full set of functionality needed to manage coherent releases. Will also need to understand and possibly remedy the dependency structure of the code in order to make Spack happy.
    • Chris Green kindly provided the following high-level list of tasks that make up Phase 2. (With a sixth step added by Tom Junk.)
      1. The experiments must convert all of their code to use cetmodules and modern CMake best practices (à la LArSoft Phase 1).
      2. The experiments must also produce and/or verify Spack recipes for their own packages, and for all external dependencies not directly supported by SciSoft.
      3. The current LArSoft stack and its dependencies must be verified to be buildable by Spack. There have been many changed/added dependencies since the last time this was done, so this is not a trivial task.
      4. We must have a system, usable by LArSoft and experiment release managers, capable of building and releasing a fixed and reproducible distribution of their code and all dependencies via Spack for all supported platforms and compilers. These distributions must be installable on supported systems with maximum (re-)use of pre-built and cached binaries, and minimum rebuilding of packages unchanged from one release to the next.
      5. We must have a multi-package development system capable of using and producing Spack-built binary packages for distribution via BuildCache.
      6. Validate everything against the release that is current at that point, obtain sign-off from all experiments, then execute the migration.

Note that items (1) and (2) involve changes to experiment code and repositories. The largest uncertainties in the scope and scale of work lie in items (4) and (5). Until these are understood, we cannot provide detailed task lists or timelines. In the meantime, experiments should work on (1) and (2), and open tickets or communicate with SciSoft team members when they encounter problems or have questions.

  • Tom: Add a step to verify that the Spack-built code runs and produces results comparable to those from the UPS version.
    • Erica:  Yes! (added above)
  • Kyle: Does Chris talk about wrapping UPS products in Spack?
    • Erica: He did in conversations about the migration, but it was not clear (to me) exactly how that fits into the plan – whether it pertains to some or all legacy products, for instance.
    • Kyle: So Chris is presenting the major work required for the migration? If we have bridge technologies, those are not covered yet?
    • Erica: Correct.  I asked for the big picture at this point so that we have a framework for discussing status and more detailed planning.
  • Workshop planning discussion
    • Points where we are seeking input
      • Feedback on the proposal circulated
      • Thoughts on specific problems / pieces of code that need to be made thread-safe or multi-threaded
        • Once code is identified, then the experiments should start identifying the teams that will come to the workshop to work on things.
      • What, if any, tutorials might be helpful at the beginning of the workshop?
      • We’re looking at 3 or 4 days for this. When might be a good time? Or maybe better, when are bad times?
    • Discussion
      • Andrzej:  Is this more a thing for experts, or for people to learn? Saw comments about tutorials. And is it in person?
      • Erica:  In my mind, a dual purpose. Acquaint more people with multi-threading techniques and solve particular problems of immediate relevance to the experiments. 
        • The target will be experienced C++ coders – so not beginning grad students – if we are to solve a real problem.
        • There are advantages to working in person (engaging with experts more easily), but expect this will not be practical.
      • Also in the proposal: work in small teams, each working on a single piece of code, hack-a-thon style.
        • Work on code that matters.
        • Have seen this model work with the right technology. So we have to put some effort into identifying a “Google Docs for coding.”
        • Andrzej: thinks the hack-a-thon idea makes it more enticing. Having some kind of introduction at the beginning would be good. We haven’t identified where the problems are.
      • Erica:  Also in the proposal, first have the experiments talk about what problems they’re trying to solve with multi-threading. Particular solutions will depend on the code. “These are the problems. These are the approaches to fix them.” For example, the database services need concurrent caching; art provides this, so we could provide a tutorial on how to use concurrent caching (a generic sketch of the concurrent-cache access pattern appears after this discussion). So target tutorials to the solutions needed. Or we might encounter an unanticipated problem along the way and decide a tutorial would help, so stop and learn about a solution.
      • Ensuing discussion concluded that workshop / hack-a-thon would be best if focused on cases where we know there is a problem, but do not yet know where, and do not yet know the solution. For things where we do know a solution, we might not need a workshop / hack-a-thon session. 
        • Seemed to be general agreement on this point (?)
      • Andrzej:  Each experiment should identify its problem and talk to the LArSoft team for advice, so that everyone comes in with a defined problem.
        • Yes. It’s important that everyone comes in with a well-defined problem.
        • Do not want to front-load too much work, but this seems a reasonable approach. If we can’t find such problems, then we don’t need to waste people’s time with a workshop, and can instead focus on facilitating fixes to the specific pieces of code that need them.
      • Kyle:  suggested reviewing slides / talks from the previous workshops on multi-threading (though the team would be amenable to repeating some of them)
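
As a companion to the concurrent-caching point above, here is a minimal sketch of the read-mostly cache pattern such a tutorial would cover. It is not the art concurrent-caching interface; the CalibCache class, the CalibData payload, and loadFromDatabase() are hypothetical names used only to illustrate the access pattern a thread-safe database service needs: many concurrent lookups, with an exclusive lock taken only on a cache miss.

    // Illustrative sketch only – not the art concurrent-caching API.
    #include <map>
    #include <mutex>
    #include <shared_mutex>
    #include <vector>

    struct CalibData {                     // hypothetical per-run payload
      std::vector<double> gains;
    };

    class CalibCache {
    public:
      // Many threads may call get() concurrently; only a cache miss takes
      // the exclusive lock to load and insert the new entry.
      CalibData const& get(unsigned run) {
        {
          std::shared_lock readLock{mutex_};
          auto it = cache_.find(run);
          if (it != cache_.end()) return it->second;
        }
        std::unique_lock writeLock{mutex_};
        auto [it, inserted] = cache_.try_emplace(run);
        if (inserted) it->second = loadFromDatabase();     // only the first caller loads
        return it->second;    // std::map never invalidates references to existing elements
      }

    private:
      static CalibData loadFromDatabase() {                // stand-in for a real DB query
        return CalibData{std::vector<double>(8256, 1.0)};
      }

      std::shared_mutex mutex_;
      std::map<unsigned, CalibData> cache_;
    };

The key property is that entries are only ever inserted, never modified or erased, so callers can hold the returned references without further locking.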

Links to relevant slides and videos of talks:  

      1. 2017 presentation Introduction to multi-threading
      2. 2019 Presentation – Multi-threaded art 
      3. 2019 Presentation – Making code thread-safe
      4. 2019 Presentation (powerpoint download) – Experience learning to make code thread-safe
      5. 2019 Presentation Introduction to multi-threading and vectorization

Round-table:

DUNE: Tom Junk

  • He ran the cetmodules migration script Chris provided in February, and made all the “required” changes, but not all of the “recommended” changes. There are a bunch of find_ups_product calls left. Do those need to go away? [Yes, we believe so.] Not using cetmodules yet – this was a practice run – but can flip the switch at any point.
  • Has not done the same for GArSoft yet. Haven’t tracked down the alternative [libraries??]. It required the latest version of TensorFlow and the LArSoft products that use TensorFlow; that all works. Currently, GArSoft is stuck on Pandora.
  • Thinking about how to handle the large scale of raw / processed digits. Talking with many people. Tied in with multi-threading, although multi-threading may be icing on the cake, since the plan is to manage it by constructing workflows that operate at the APA level [from file I/O through data prep and deconvolution]. There are still file I/O issues to deal with. Have been consulting with Kyle on this.
    • Kyle: the only applicable framework support is things (like removing caching) that Tom is already aware of, or altering the way data products are stored, which is a big change. They’re reading one APA at a time; things could be improved a bit. The framework does support the concept of an abstract delayed reader, but that doesn’t get away from the basic problem they’re having.
    • Tracy: Before ICARUS can run multi-threaded, there are some services that need to be changed. Two of them, maybe the Detector ones.
    • Kyle: DetectorPropertiesService and DetectorClocksService are already thread safe. ChannelMappingService and the services that access things in databases are still issues. Saba & Kyle made a lot of progress, but didn’t get it finished. There is a dedicated branch for this.
    • This particular work was one of the casualties of the loss of effort from the project team, so there has been no progress on it since Saba left.
    • Tracy: We would like to make use of this. We’re running single-threaded jobs on three grid slots, effectively throwing away two cores.
    • Erica: The loss of effort has hurt us. The important thing now is to know exactly what services are the impediments in your case.
    • Tracy: I’ll try to follow up.

DUNE: Tingjun Yang

  • Working on simulating neutrino interactions in the Near Detector. He summarized this at the last LArSoft meeting. We figured out a way to save energy deposits in both detectors. Identified a few places where we need to make the framework (LArSoft) more flexible to accommodate different detector types (e.g., the geometry system). Hans provided a workaround for one of the problems, and Gianluca made some improvements to the Geometry service.
  • Next want to work on the drift and detector response simulation. Need to think about how to get the locations of the pixels, determine directions inside volumes, etc., which will require changes to the geometry.
    • Erica:  started work on this with Hans and Kyle (and Tingjun). Believe everyone agrees on the conceptual design, but need more discussion and more planning to make a detailed design that we can start implementing. Have been busy the past month, but will try to continue this work before the end of the month.

SBND: Andrzej Szelc

  • Had an SBND collaboration meeting at the end of June at Fermilab. People want to use different generators, including some BSM generators. No one seems to know about the LArSoft work to make this easier, or the GENIE work. And they would like GENIE 3.2 as soon as it comes out.
    • Erica:  The SciSoft team is getting weekly reminders about the need for this. Believe the holdup until now has been Spack-related work, but now that Phase 1 is completed, we should be able to prioritize getting GENIE updated.
  • Progress on photon detection reconstruction.

SBN Data/Infrastructure: Chris Backhouse

  • Nothing to report.

ICARUS: Tracy Usher

  • Nothing to report.

May 2022 Meeting Notes

Offline Leads – May 25, 2022

Attendees: Chris Backhouse, Wesley Ketchum, Herb Greenlee, Joseph Zennamo, Tom Junk, Tingjun Yang, Erica Snider, Katherine Lato

LArSoft – Erica Snider

  • The migration of the Redmine LArSoft Wiki to GitHub Pages has been completed and is now available at https://larsoft.github.io/LArSoftWiki/. Among other things, this move should in principle allow search engines to index the LArSoft documentation, as was possible before the Fermilab web servers were put behind the Fermilab SSO. To date, however, Google searches do not find the LArSoft GitHub wiki.
    • Note: bing.com and duckduckgo.com do find LArSoft wiki pages on GitHub.
    • Should you have an edit or other content suggestion, you may let us know via issue tickets (which are still in Redmine), pull requests on the LArSoft/larsoft.github.io repository on GitHub, or email to scisoft-team@fnal.gov.
  • Thread safety and multi-threading work:  Mike Wang has been working on a simplified DUNE SNB processing workflow that uses the 1D deconvolution in CalData instead of WireCell. The steps are:
    • CalData
    • GausHit
    • SPSolve
    • HitFD
    • TrajCluster
    • PMTrackTC
  • The CalData stage has been modified to use a thread-safe implementation of LArFFT written by Mike. The GausHit hit finding stage is based on Mike’s implementation of a Levenberg-Marquardt fitter that Giuseppe Cerati and Sophie Berkman ported to LArSoft. Mike is currently looking at the SPSolve stage, which is the 3D space point solver that also performs some disambiguation of hits. Aside from making this stage thread safe, there is an opportunity to incorporate multi-threading within the module, as is done in the GausHit stage (a generic sketch of the in-module parallelism pattern appears at the end of this section).
    • Mike is currently validating that multi-threaded and single-threaded execution yield the same results. As of May 16 there were differences, so he was working to identify the source.
  • Work requested by SBND to implement legacy LArG4 behavior has been completed. Though it missed the SBND deadline, the work is now at the head of develop, so it will be available in future releases. A proposal for a long-term solution has been advanced. A follow-up discussion is needed to determine how to proceed from this point.
  • Spack migration
    • The Phase 1 migration of all LArSoft repositories to cetmodules is nearly complete. The process was relatively straightforward in almost all cases. larrecodnn required some extra attention, which is not unusual given that it touches TensorFlow.
    • Expect work toward Phase 2 to begin shortly. Unlike Phase 1:
      • The tools and procedures for building will change
      • Everything changes at once – we wake up one day, and everything will be different. There are no staged changes
    • Do experiments plan to follow this migration? (It might be required, but not yet sure.)
    • Q:  How will builds of external packages maintained by experiments work once we migrate?
      • Currently packaging a number of experiment code dependencies in UPS
      • A:  not entirely sure. Will discuss this within the SciSoft team. SciSoft will in general provide assistance with migrating experiment code and external dependencies.
    • The details of the migration are not yet known, but will feature
      • A migration path that allows for testing of all relevant code prior to the switch
      • An education campaign for users
      • Assistance with migrating any experiment code needed 
      • Timelines developed in close consultation and agreement with experiments
    • Q:  What about legacy branches, e.g., the MicroBooNE MCC9 series?
      • Presumably, things that live in the legacy world will continue to work within the legacy environment. This should not even complicate back-porting code, since that usually does not involve elements of the build system.
    • A discussion with Chris about what NOvA is doing ensued, since they are art users. Not likely to be directly relevant to LArSoft, though could affect the timeline if an associated migration is needed.
    • Wes:  for SBN, there is the question of how to manage the Spack transition for the online environment
      • The DAQ is built on artdaq. Everything relies on relocatable UPS packages.
      • Erica:  so this suggests that transition needs to happen during a beam downtime?
      • Not necessarily. Noted that they can operate in a legacy mode for some time, but it is best not to be in that position long-term. So it just needs to be coordinated with the SBN DAQ.
      • Next SBN downtime:  Early July through mid-Sept, full beam back early Oct. (we think)
      • Wes will be working with DAQ people to discuss this.
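
A generic sketch of the in-module parallelism pattern referred to in the multi-threading item above, assuming TBB (which art itself uses for scheduling) is available. This is not the actual GausHit module code: the HitCandidate struct and fitOneWire() function are hypothetical stand-ins for the real Levenberg-Marquardt fit and recob::Hit handling. The point is only that each task writes to its own output slot, so no locking is needed.

    // Illustrative sketch only – not the GausHit module.
    #include <cstddef>
    #include <vector>
    #include "tbb/blocked_range.h"
    #include "tbb/parallel_for.h"

    struct HitCandidate {               // hypothetical stand-in for a fitted hit
      double peakTime;
      double amplitude;
    };

    // Trivial stand-in for the per-wire fit: record threshold crossings.
    // The real work must touch only its own inputs and outputs (no mutable
    // shared state) to be safe to run concurrently.
    std::vector<HitCandidate> fitOneWire(std::vector<float> const& waveform)
    {
      std::vector<HitCandidate> hits;
      for (std::size_t t = 0; t < waveform.size(); ++t)
        if (waveform[t] > 10.f) hits.push_back({double(t), waveform[t]});
      return hits;
    }

    void findHits(std::vector<std::vector<float>> const& wires,
                  std::vector<std::vector<HitCandidate>>& hitsPerWire)
    {
      hitsPerWire.resize(wires.size());
      // Each task writes only to its own slot of hitsPerWire, so no locks are needed.
      tbb::parallel_for(tbb::blocked_range<std::size_t>{0, wires.size()},
                        [&](tbb::blocked_range<std::size_t> const& r) {
                          for (std::size_t i = r.begin(); i != r.end(); ++i)
                            hitsPerWire[i] = fitOneWire(wires[i]);
                        });
    }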

Experiments:

  • DUNE
    • Tom: 
      • ProtoDUNE 2 takes data next year. Will use the DUNE DAQ. Wes knows more of the details.
      • DUNE can probably find people to do the Spack migration on the timescale we want, provided that the legacy system is available to everyone else. DUNE may require help with Phase 2 Spack migration if they get stuck.  
        • Erica:  This is expected. SciSoft will provide support.
    • Tingjun:
      • In discussions with the LArSoft team about supporting the detector simulation.
        • Erica:  excited to have effort from the experiment directed into this long-standing interest for LArSoft. Will be working with the experiment to develop a plan for the necessary changes. That will involve changes to the geometry, so we need to be clever so as to minimize the disruption from that.
        • Tom:  commented on the possibility of dueling software stacks. Do not want to disrupt the ability to continue code development.
        • Erica:  prefer integration, but there may be places where dueling stacks occur. Want to minimize that unless there are clear gains.
        • Note:  There is already a PR https://github.com/LArSoft/larsim/pull/94 and a resolved redmine ticket to address some of the issues related to this work: https://cdcvs.fnal.gov/redmine/issues/26961
  • ArgoNeuT:  Tingjun
    • ArgoNeuT imported the CVN product, which provides DL tools to select neutrino events.
      • Originally added to DUNE, then copied from there into ArgoNeuT, but there was an issue trying to build it in ArgoNeuT. One difference with the DUNE usage was that DUNE followed the update to use the Triton package with CVN, but ArgoNeuT did not.
      • One idea to resolve this is to move the common code to LArSoft and only have the experiment specific part in each experiment repository. 
      • Working on it now, may need support from LArSoft.
  • SBN:  
    • Joseph
      • Chris Backhouse is taking over the SBND side of SBN for Joseph. 
      • SBN just launched the first large scale production of beam exposure. That’s been going well. 
      • Moving on to the next stage, at-scale production similar to one year’s exposure. 
        • Chris will be taking the lead on this. 
        • Maybe 100 million events. 
        • This will probably strain our systems; we already see the need for performance and workflow improvements.
          • Have been focusing on running all the basics that are needed at scale, but now performance upgrades are needed. 
          • Hope to adopt 2D deconvolution, overlay workflows, etc. 
          • Long-term, need to come back to understand how to lower [delta-ray] production thresholds in Geant4. Those improve fidelity, but come with a steep performance impact. 
    • Wes
      • We need to go through and plan the next production and the associated software updates. There might be a number of requests that come in related to that.
        • E.g., different lifetimes in different cryostats.
  • MicroBooNE: Herb
    • Making an effort to integrate MCC9 updates into develop. 
      • May require updating to the refactored LArG4 framework. Would require help or advice for this migration.
        • Otherwise, worried that they will be left behind. For instance, the new light simulation is probably the most important new development that would be useful to MicroBooNE. This is in the new LArG4.
        • Erica:   very glad to hear that this is in the plan for MicroBooNE. The project will provide whatever assistance is needed. 
    • Worried about whether Redmine will go away.
      • MicroBooNE has not migrated to GitHub, and has not been pushing that.
      • One advantage of Redmine is that it has one landing page with links for wiki, repository, and issues. Would need to work to replicate this on GitHub. Otherwise people need to look in multiple places for everything.
        • Erica:  so far no indication there is an EOL date for Redmine. Will bring the question to Jim Amundson at next meeting with him.

Please email Katherine Lato or Erica Snider for any corrections or additions to these notes.

January 2022 Meeting Notes

The January 2022 LArSoft Offline Leads status update was handled via email and a google document.

LArSoft – Erica Snider

  • The 2022 LArSoft workplan was approved at the December Steering Group Meeting.
  • The project team continues to make progress toward rolling out phase 1 of the spack migration, but we do not yet have a timeline for completion. The target is still to be ready by the end of January. We will have additional information at the January 25 LArSoft Coordination Meeting.
  • We have performed a number of updates to LArSoft.org, including the addition of information about LArSoft on HPC. If you have experience with running LArSoft on HPC resources, or are working on development toward that goal, please let us know so that we can include it on this page.
  • LArSoft Redmine wiki:  the migration of the LArSoft wiki to GitHub is under way, with the release notes and about a quarter of the remaining pages validated post-migration. Issues with the converted markdown currently limit the rate of progress. We will provide additional information at the January 25 LArSoft Coordination Meeting.
  • In response to numerous questions, we’ve added information on how to cite LArSoft. This is available at:  https://larsoft.org/citing-larsoft/ 

MicroBooNE – Herbert Greenlee

MCC9-related updates were merged into the larsoft and uboone suite integration releases as of version v09_41_00. Refer to the talk at the Dec. 14, 2021 LArSoft Coordination Meeting.

SBND – Andrzej Szelc

Material, including videos, from the 6th UK LArTPC Software and Analysis workshop in November 2021 is available on the LArSoft training website.

This informal workshop was intended for LArTPC Software/Analysis beginners (mostly PhD students and post-docs). The aim was for new collaborators on LArTPC experiments to become familiar with the software and analysis tools commonly available to experiments such as MicroBooNE, SBND, DUNE, protoDUNE and ICARUS. The workshop was held in a hybrid mode at the University of Edinburgh and online.

SBN Data/Infrastructure – Joseph Zennamo, Wesley Ketchum

Working through large-scale production testing ahead of next major SBN release. There are major issues in SBND with memory, related to moving to the refactored LArG4 and lingering issues in the ‘rollup’ of truth information from showers. We met with Hans/others earlier in the month and developed a plan, but haven’t seen progress on that yet. This is an urgent need for simulation (and Dom Brailsford reports this is affecting DUNE as well).

DUNE – Heidi Schellman, Tingjun Yang, Michael Kirby

No Report

ICARUS – Daniele Gibin, Tracy Usher

No Report

LArIAT – Jonathan Asaadi

No Report

Please email Katherine Lato or Erica Snider for any corrections or additions to these notes.

August 2021 Meeting Notes

This Offline Leads status update was handled via email and a google document.

LArSoft – Erica Snider

  • Making progress on the art 3.09 migration, and have a third release candidate. We are aiming to transition LArSoft to art 3.09 during the week of Aug 9 or 16, depending upon what additional problems are found.
  • After art 3.09 is in place, we expect to be in a position technically to migrate LArSoft to a build system based on cetmodules with a spack back end that provides backwards compatible support for UPS. (See the presentation by Chris Green at the Feb 23, 2021 LArSoft Coordination Meeting for some discussion of this migration). Work on this migration will begin immediately after the art 3.09 migration. 
    • Prior to rolling out the new system, we will provide experiments an opportunity to review documentation and our user support, along with a release candidate with the new system. We will seek explicit sign-off from the experiments prior to migration.
    • After this migration, we will begin work on phasing out UPS in preparation for a move to the final spack-only system. Additional user resources will be provided prior to that change.
  • Progress on thread-safety has slowed. The current focus is on converting services that access the database to use the art concurrent caching support infrastructure.
  • Kyle is working on preparing a profiling and optimization presentation, as requested in issue #25831. He proposed three separate 30-minute sessions:
    1. Basics of CPU and memory usage (stacks, caches, heap allocations) and guidelines for their use
    2. Tools for profiling your programs
    3. Stepping through profile results of a sample program

DUNE – Andrew John Norman, Heidi Schellman, Tingjun Yang, Michael Kirby

DUNE has scripts to split up dunetpc, but is waiting for Dom Brailsford to commit a rearrangement of the services fcl files which affect the ability of unit tests to run independently.  They plan on moving to GitHub for the new split repositories.  Heidi and Andrew are evaluating ways DUNE collaborators should use GitHub now that username/password access is disabled and tokens or SSH keys are required.

ICARUS – Daniele Gibin, Tracy Usher

No Report

LArIAT – Jonathan Asaadi

No Report

MicroBooNE – Herbert Greenlee

At the Aug. 24, 2021 LArSoft coordination meeting, Herb presented a plan for reconciling the LArSoft version of data product ParticleID (package lardataobj) with the MicroBooNE MCC9 version.  The long term goal is for MicroBooNE to merge its MCC9 production release updates into the develop branch.  Some follow up work is required to decide between the strategy of updating ParticleID on the develop branch to match MCC9, or adding an entirely new data product class.  The sticking point all along has been backward compatibility with data files written using older versions.

SBND – Andrzej Szelc

No Report

SBN Data/Infrastructure – Joseph Zennamo, Wesley Ketchum

SBN is preparing for the August production in advance of the larger October production push. 

As part of this, SBND has migrated to using the latest refactored LArG4, where they have observed issues with the MCParticle collections containing non-unique TrackIDs, and segfaults when trying to access trajectory information. They have followed up with experts.

Please email Katherine Lato or Erica Snider for any corrections or additions to these notes.

July 2021 Meeting Notes

Offline Leads meeting – July 15th, 2021

Attendees: Miquel Nebot-Guinot, Andrzej Szelc, Wesley Ketchum, Tom Junk, Erica Snider, Katherine Lato

LArSoft:

  • Working on making services that access the database use the caching system. (What Kyle Knoepfel presented at the LArSoft Coordination Meeting in November, 2020.)
  • Have been working through issues related to art 3.09 migration.
    • Recently resolved two ROOT issues. One is still being tested.
    • It takes time to iterate on issues in the product stack.
    • Expect this to be completed soon.
  • The first phase of the Spack migration requires art 3.09, and is expected to follow relatively quickly after the art 3.09 migration. This phase will be compatible with UPS and mrb, so it will not require major changes in how we do things.

Round Robin:

  1. SBN Data/Infrastructure  (Wes, Miquel) 
    1. Need to think about the online systems for SBN. We use UPS local products, and run two environments: one on the DAQ side, so more real-time/online, and the other on data quality, so more like offline. We need to get experience with this for the Spack transition, but nothing appears to be outside current methods. The way we do stuff in the online system mirrors what is done offline.
    2. Looking to freeze the code and get things in order for the next few months. It is advantageous to us to have the new build as soon as possible. ICARUS has a major physics run in the fall. Hoping to freeze the pieces needed for the ICARUS data reconstruction. When we freeze the code, we’re going to want to optimize it. May reach out for help on profiling. Hopefully in 2-3 weeks, we’ll have code that does what we want and will have dedicated time for optimizing. This will be our general pattern moving forward: freeze functional production code, then dedicate time to optimization.
  2. SBND: (Andrzej)  Getting ready to move to the new Geant4 framework. Made a module to take the CRT output from the new way (SimAuxDetHits) to the old (SimAuxDetChannels?). The module takes hits and packages them as channels – two objects that are effectively identical. Is this something LArSoft would be interested in?
    1. Erica:  Contributing that to LArSoft would be good. 
    2. Andrzej: Will let Ivan know to get in touch with LArSoft.
  3. DUNE: (Tom) 
    1. Been working on chopping dunetpc into pieces because the builds are slow. Chopped it into smaller pieces by taking directories out and assigning them to UPS products. Not too different from how LArSoft arranges things. Wrote a script to do the chopping, since the code changes while working on the split. One issue: the FHiCL files don’t factor as easily as the code, because they are included often and include many other files. The FHiCL files can depend on things not present in the code dependency tree, so if I put a file higher in the tree, it depends on things that aren’t there. Can get around this by setting up the whole tree, but then it’s just like dunetpc now. For LArSoft, can people set up subsets or must they run the whole thing?
      1. Erica: LArSoft depends on having experiment code – no native detector, for instance – so can’t run anything outside the context of an experiment. Doing an integration-type test therefore requires a lot of repositories. I would expect to set up everything to do integration tests. For unit tests, the repositories should be stand-alone if done correctly. ‘mrb test’ runs unit tests at build time on one repository at a time. Historically, there were integration tests (so full art workflows) put in the unit test part. As an aside, would encourage DUNE to strip all that out and put all integration tests into CI workflows. Can define many workflows there if you don’t want them all to be run automatically. Then make sure all unit tests are stand-alone, so testable one repository at a time with ‘mrb test’. (A minimal stand-alone unit test sketch appears at the end of this Round Robin section.)
    2. David Adams gave a talk yesterday. Again advocated structuring around art tools
      1. The LArSoft MT work is de-servicing as much as we can, since at least some of the current services don’t need to be services (e.g., things where there is no need for global scope); they are really just there to take advantage of art state transitions.
      2. State transitions can be handled at module scope with tools in many cases. 
      3. Noted that ProtoDUNE pulls event data from a DB. Beam configuration is at the spill level (where a spill is ~15 sec long). Need to optimize DB access for these cases.
    3. Discussed FHiCL structures again within the context of re-factoring repositories. Long discussion about the trade-offs of aggregating configuration versus layering, shortcomings of the current scheme, other ways to organize the layering, the utility of base configurations, etc. Difficult to summarize, and no clear conclusions.
    4. Noted that since https access to Redmine repositories has been removed, many collaborators who want to develop code (international developers in particular) can no longer check out DUNE code.
      1. So want to deploy to GitHub. 
      2. Wes noted SBN was happy with the move to GitHub and the use of pull requests. Having the pull request mechanism in place is helping to improve the quality and stability of the code. They are starting to do code reviews as the code comes in. Erica echoed a similar situation for LArSoft; it is particularly good that with pull requests, LArSoft is able to test the code before merging. Tom is not sure DUNE has the effort available.
      3. Wes commented that there are instructions on Redmine for how to set up a mirror on GitHub.
    5. Also a discussion of factoring along functional versus detector axes. DUNE has a lot of detectors, so much of the organization is along detector lines. A lot of the code is detector-specific. Can’t use ProtoDUNE code for DUNE FD.
      1. Wes offered to share examples of using two detectors — SBND and ICARUS with common SBN underneath. Driven by how people work. Have two collaborations working together, though, which may not fit the DUNE model as well.
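
A minimal sketch of the kind of stand-alone unit test mentioned in the ‘mrb test’ discussion above: it needs no art job, FHiCL configuration, or other repositories, so it can run one repository at a time at build time. Catch2 (v2, single-header style) is used here only as an example framework, and clampADC() is a hypothetical utility invented purely for illustration; the header path may vary with the Catch2 installation.

    // Illustrative sketch only – a stand-alone test of a hypothetical utility.
    #define CATCH_CONFIG_MAIN            // let Catch2 (v2) supply main()
    #include "catch2/catch.hpp"

    #include <algorithm>

    // Hypothetical helper a repository might want to test in isolation.
    int clampADC(int adc, int low = 0, int high = 4095)
    {
      return std::clamp(adc, low, high);
    }

    TEST_CASE("clampADC keeps values inside the ADC range")
    {
      CHECK(clampADC(-5) == 0);
      CHECK(clampADC(100) == 100);
      CHECK(clampADC(70000) == 4095);
    }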

Please email Katherine Lato or Erica Snider for any corrections or additions to these notes.

May 2021 Meeting Notes

This Offline Leads status update was handled via email and a google document.

LArSoft – Erica Snider

  • The previously proposed rollback of the hdf5 package will not be necessary. We have the required e20 builds, which required patches to an externally supported package. Thank you to those who followed up with testing the rollback.
  • The migration to art 3.09 is in progress, and is expected to be completed by mid-May. This new version comes with three associated changes:  
    • e20 as the default build qualifier
    • A new version of ROOT that addresses a problem reading certain files (issue #25615). This version of art is also compatible with cetmodules, and will enable the first phase of the migration to the Spack-based build system
    • TensorFlow v2.3
  • SBN previously requested assistance and possibly a tutorial on profiling tools and techniques. The SciSoft team is prepared to provide this assistance. SBN should make a specific request via the Offline Leads meeting or a Redmine issue ticket.
  • Update on the status of memory footprint increase reported by DUNE:  Tom Junk reported some progress on the DUNE side. There has been no further progress to report from the LArSoft side. Kyle Knoepfel is tasked with following up.
  • The project has no progress to report on geometry extensions for pixel detectors.

DUNE – Andrew John Norman, Heidi Schellman, Tingjun Yang, Michael Kirby

dunepdsprce, dune-raw-data and dunetpc have been compiled and tested with e20. It took a little maintenance, as the data read-in methods sometimes involved creating pointers to elements of packed structures, which the compiler flags for possible unaligned data; e20 emits a warning for these. All fixed, though if someday in the future 32-bit objects get padded unless we say packed, we could be in for more maintenance. Tom’s progress on the memory footprint issue consisted of identifying software components that take more memory in larsoft v09_16_00 as compared with v09_15_00, and a lot of it seems to be what ROOT loads with it.
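
For illustration, here is a sketch of the kind of construct behind the e20 warnings described above, with a made-up Fragment layout standing in for the real raw-data structures: GCC 9 (the e20 compiler) warns when the address of a packed member is taken, because the resulting pointer may be unaligned; copying the bytes out through a char pointer avoids both the warning and any unaligned access.

    // Illustrative sketch only – Fragment is a hypothetical packed layout.
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    struct __attribute__((packed)) Fragment {
      std::uint8_t  header;
      std::uint32_t timestamp;   // sits at offset 1, so it is not 4-byte aligned
    };

    std::uint32_t readTimestamp(Fragment const& f)
    {
      // std::uint32_t const* p = &f.timestamp;
      //   ^ with e20 (GCC 9) this draws -Waddress-of-packed-member:
      //     taking the address of a packed member may yield an unaligned pointer
      std::uint32_t ts;
      std::memcpy(&ts,
                  reinterpret_cast<unsigned char const*>(&f) + offsetof(Fragment, timestamp),
                  sizeof ts);    // byte-wise copy is safe regardless of alignment
      return ts;
    }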

ICARUS – Daniele Gibin, Tracy Usher

No Report

LArIAT – Jonathan Asaadi

No Report

MicroBooNE – Herbert Greenlee

No Report

SBND – Andrzej Szelc

No Report

SBN Data/Infrastructure – Joseph Zennamo, Wesley Ketchum

From email: We have opened a request for a profiling tutorial for SBN developers:

https://cdcvs.fnal.gov/redmine/issues/25831

Please email Katherine Lato or Erica Snider for any corrections or additions to these notes.