
June 2017 LArSoft Offline Leads Meeting Notes

Attendees:
Andrzej Szelc, Wes Ketchum, Thomas Junk, Herb Greenlee, Erica Snider, Katherine Lato

Round robin:

LArSoft – Erica and Katherine discussed the process for following up on issues: if something is a ‘real issue,’ we’d like the Offline Leads to create the Redmine issue so we know whom to follow up with. We then went through some of the current issues, but broke off to leave time for the round-robin reports.

DUNE – Tom Junk

  • They’re dealing with a number of little things that are experiment-specific.
  • The ProtoDUNE DAQ group will provide error codes as well as other per-tick information, which amounts to a lot of error data. Erica suggested putting it in a parallel structure so it’s easier to navigate. There will be some experiment-specific code that will have to know about it.
  • DUNE is interested in knowing when the LArG4 restructuring will be done.
  • For best practices in DUNE, they are currently telling people to follow art’s and LArSoft’s. Note that LArSoft is happy to take suggestions for improving our recommended practices. This discussion arose from an issue for which art proposed a technical solution along with a justification based on best practices.

ICARUS – Daniele Gibin
No report

LArIAT – Jonathan Asaadi
No report

MicroBooNE – Wesley Robert Ketchum and Herb Greenlee

  • Wes has spoken with Hans and still has follow-up to do with him on integrating with the LArG4 restructuring work. Wes had put this on hold but will try to find time for it.
  • Both ICARUS and SBND state that they would like simulated energy depositions stored separately as an output of Geant4, for use in event mixing and potentially for translating events within and among detectors (for systematic studies). Other experiments are interested as well.
  • There are basic modules for electron and photon (library-based) propagation, but more effort will likely be needed here (from other experiments or the project). For instance, NEST currently has hooks into the Geant4 geometry/materials, and this cannot be handled cleanly right now. Storing depositions separately would also mean doubling up data, since energy depositions are already stored in a slightly different way as part of the sim channels. Design and infrastructure work may be necessary to evolve or consolidate how the information is stored and how things are properly backtracked. Overall, this deserves a solid review if any significant changes are planned, since we would prefer to make a change like this once and have it be stable for some time.
  • The MicroBooNE spokespersons are concerned about giving credit to the authors of code contributed to LArSoft, and to their experiments. During the discussion, we also mentioned including the institution. LArSoft has a place to give author credit for algorithms and services on larsoft.org, as well as in the code itself. This is not being done enough, so we suggest a multi-prong education campaign to encourage people to sign their name on the code they write and to contribute to the http://larsoft.org/doc-algorithms/ page. This will be highlighted at a LArSoft Coordination Meeting with follow-up, but it needs to be done by all experiments, not just LArSoft. One reason people should want to do this is that they may want a job reference some day, and people need to be able to tell what they did.
  • Code should contain five pieces of information every time: 1) author name, 2) experiment, 3) institution, 4) date, and 5) a good description of what the code does (e.g., written by author X from experiment Y at institution Z). It’s important not to imply that the code is for a given experiment; some people will assume that means it doesn’t apply to them.
  • LArSoft can and should supply templates, but how do we ensure that people use them? Using the template generators produces a header with the information already defined. We should update those templates to include the experiment and institution, and there should be skeletons for generic files such as algorithms, Python macros and tools, not just art-specific ones; an effort to update them would be good. One problem is that most people develop on their laptops, so they may not use the template generators. Still, we can take this as an issue: create skeletons for a documentation section at the top of header files for module code, algorithms and services (see the sketch after this list), post these to Redmine, and make them part of the skeleton generators. This may need a better name. Perhaps the first step is to discuss it at a Coordination Meeting and assemble a team from multiple experiments to work on it.
  • We also talked about automatic ways to determine who is doing the work, such as a commit counter, and then doing something with that information, like prompting people to add an entry to larsoft.org.
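
As a concrete starting point, a documentation section carrying those five items might look like the following minimal sketch (the file, author and algorithm named here are hypothetical; the comment style is the Doxygen-readable format recommended in the LArSoft documentation guidelines):

    /**
     * @file    MyClusterAlg.h   (hypothetical example)
     * @brief   Groups nearby hits into 2D clusters using a simple distance cut.
     * @author  A. Physicist (Experiment X, Institution Y)
     * @date    June 2017
     *
     * Longer description: what the algorithm does, its inputs and outputs,
     * and any assumptions or limitations. Avoid wording that suggests the
     * code applies to only one experiment.
     */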

SBND – Andrzej Szelc

  • SBND is very interested in Hans’ work, especially anything involving wavelength and other effects that change the propagation times. The effect would show up in the photon library, but Hans hasn’t said he has functions that would replace the photon library. They’re interested in what is going on and in anything that will speed things up. They are using the index of something now; the timing simulation takes that into account.
  • A large part of the simulation is in; thanks to Erica and Gianluca for all the help with that. They had been taking the standard GEANT4 projections for granted but forgot to include them, so they are building a branch to put that in. They also uncovered a bug in GEANT4 timing: there is code in LArG4 that is copied verbatim from GEANT4, and the charge loss calculation for the first step is wrong, so the timing can be slightly off, which affects the distribution. They have a hacky fix that checks for this and corrects the velocity of the first step if need be. A systematic fix would be better, but GEANT4 says the code works for them; something in GEANT4 prevents it from being a bug on their side, but it is one for us. We are moving to GEANT4 photon propagation.
  • When they tried to do a demo using the event display, the mouse function didn’t work.

Issues from previous Offline Leads meetings to follow up on, and new issues from the 6/14/17 meeting.

  1. From May 2017: How to handle different interaction time hypotheses for particles in an event.
    • Background: In ProtoDUNE, there can be four or more measurements of the time of an interaction: associated beam arrival-time information, cosmic-ray counter hits, photon-detector hits, and a diffusion-based measurement. Some of these may have mis-associations, and multiple candidate times may be reconstructed. This is probably just an analysis issue of deciding what we want to do with these candidate times. The obvious use is to calibrate timing offsets in the detector, but there may be others, such as improving the reconstruction of particles broken across different volumes.
    • 6/14/17 update – The attendees discussed possible infrastructure work related to how to handle different interaction times, with some clarification of terms. All agreed that “t0” is not a good name, but that’s what we have for it. This value needs to be attached to different things, so the current practice is to create associations between t0 objects as needed, and let experiment analyses deal with the details. Tom (who brought this up last month) agreed that attempting to centralize any further handling of this information within LArSoft is not needed. This issue can be closed.
  2. From May 2017: Feedback at a recent SBN analysis meeting on the proposed restructuring of the G4 simulation step was that these are potentially very important changes. Is it possible to see all of this in place this summer for ICARUS large-scale processing and potentially a MicroBooNE processing campaign? Along with the work itself, how one updates/does the backtracking remains an unanswered question: it is not technically difficult, but it could imply significant changes in downstream code depending on how it is done. It may take another person, or a few more, working on it to really see it through.
    • The project and schedule are being tracked in issue 14454 with expected completion in the middle of July. https://cdcvs.fnal.gov/redmine/issues/14454
    • After a 6/14/17 consult with Hans on the current status – Work in Geant4 has been completed; the remaining task is to restructure and adapt LArG4. This portion of the work is being tracked in the above issue. The new version of LArG4 will use the artg4 toolkit, and will expose the energy deposition after particle tracing through the LAr as a new interface layer. The energy deposition will be performed via step limiting, which will eliminate the current use of voxelization within a parallel geometry. Modeling of ionization and photon statistics will continue to be performed via an interchangeable unit that is independent of the rest of the code. Charge and photon transport will be factored into new modeling layers, where the latter is optionally performed by Geant4. Charge transport modeling will deliver arrival times and positions at a predetermined plane (e.g., a readout plane, a grid plane, etc.). The current thinking is that the details of how charge transport occurs within the anode region will be handled separately. An alternative is to allow Geant4 to deliver charge to each of the anode planes. This scheme provides for charge deposition within the anode region, but would complicate adapting the simulation to different readout schemes (e.g., 90-degree strips as in DP detectors, or readout pixels as proposed for the DUNE ND). The photon transport modeling will deliver arrival times at photo-detectors after all wavelength shifting. This scheme allows for complete Geant4 modeling of photon transport, including all wavelength shifters (which are defined in the detector geometries), or alternatively, parameterized transport outcomes. The choice between the two options will be made via fhicl options (either disabling Geant4 photon transport and enabling the factored transport model, or vice versa). A modified backtracking scheme comes along with all of this.
  3. From 4/19/17 meeting: DUNE wants something added to the hit object to store dual-phase-specific quantities. Robert Sulej is working on this with a dual-phase collaborator. The additional hit parameters are used only in the event display. Having them inside the hit would make the code simpler, but keeping the parameters separate is also reasonable since they are specific to dual phase. In the code branch being developed there is a working solution that keeps the hit parameters in a separate collection, with no need for Assns.
    • 5/25/17 update – need to follow up to see if keeping the hit parameters in a separate collection works.
    • 6/14/17 update – Concluded that the current scheme of using additional data products for DUNE is adequate.
  4. From 4/19/17 meeting: Presentation from a student on a CNN and adversarial network? According to Robert Sulej, it is possible to imagine such a machine-learning-based filter for the detector/E-field response simulation, but that is future work. They are currently working instead on providing a data-driven training set for CNN model preparation. They need time to understand the results before they can say what the limitations of the tool are.
    • 5/25/17 update – This has been on the schedule for a coordination meeting, and has been postponed at the request of the authors
    • 6/14/17 update – No update
  5. From 3/22/17 meeting: SBND has been trying to include files inside GDML and has run into problems. Gianluca has been helping debug this. A new version of ROOT may be a solution.
    • 4/18/17 update: The version of ROOT used by LArSoft has changed a few times over the last few weeks. Andrzej Szelc said they haven’t looked into this because they were focused on getting the basic geometry in. Once that is done, they will look at this and hope the new ROOT version has fixed it. So no ticket has been written yet (to ROOT or LArSoft), since it isn’t clear whether it is still an issue.
    • 5/25/17 update – still waiting.
    • 6/14/17 update – Once SBND gets the latest version of ROOT in, they’ll see if this fixes the issue.
  6. From 6/14/17 meeting: MicroBooNE asked how to give credit to the authors, experiments and institutions behind LArSoft code. We have updated the recommendation on documenting code at http://larsoft.org/important-concepts-in-larsoft/design/. We will also present a proposed documentation template for header files at a LArSoft Coordination Meeting. LArSoft has a place to give author credit for algorithms and services on larsoft.org, as well as in the code itself. This is not being done enough, so we suggest a multi-prong education campaign to encourage people to sign their name on the code they write and to contribute to the http://larsoft.org/doc-algorithms/ page. This will be highlighted at a LArSoft Coordination Meeting with follow-up, but it needs to be done by all experiments, not just LArSoft.

Please let us know if there are any corrections or comments to these meeting notes.

Katherine & Erica

LArSoft Workshop on Tools and Technologies – June 2017

Welcome – Panagiotis Spentzouris

Panagiotis Spentzouris welcomed participants and said the emphasis for LArSoft is on moving forward, improving technologies. Looking at the agenda, that’s what LArSoft is doing. As Panagiotis said, “Improvements in performance have been accomplished … Looking at the path of this project, things are on track.” Panagiotis’ welcome is available as an audio file below.

Introduction and welcome –  Erica Snider

Erica provided an introduction to the workshop, explaining that the reason for the workshop is that, “We want things that make our work easier, that help us produce better code and make our code run faster/more efficiently.” The workshop explores parallel computing, Continuous Integration, a new build system for art/LArSoft called Spack, debugging and profiling tools. Erica’s presentation is available as slides or as a video.

Introduction to multi-threading – Chris Jones

Chris explained that multi-threading uses threads that can interact through shared memory, whereas multiple processes cannot. Multi-threading speeds up a program and enables sharing of resources within the program itself. This is needed because of trends in computing hardware: CPU frequencies are no longer increasing. The presentation included examples of race conditions and made the point that there are no benign race conditions. There are different levels of thread safety for objects:

  • thread hostile – not safe for more than one thread to call methods even for different class instances
  • thread friendly – different class instances can be used by different threads safely
  • const-thread safe – multiple threads can call const methods on the same class instance
  • thread safe – multiple threads can call non-const methods on the same class instance

Chris also covered how to get code ready for multi-threading: removing ‘global’ variables, not using mutable member data in event data products, and removing mutable state from services. A useful talk about the C++ threading memory model is available at: https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutteratomic-Weapons-1-of-2
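
As a minimal illustration of the shared-state problem discussed in the talk (this sketch is not from the presentation itself), the example below has two threads updating one counter; with a plain int this would be a data race, while std::atomic makes the update well defined:

    #include <atomic>
    #include <thread>

    // Two threads increment one shared counter. With a plain 'int' this is a
    // data race (undefined behavior); std::atomic<int> makes each increment an
    // indivisible operation, so the final value is deterministic.
    std::atomic<int> counter{0};

    void work() {
      for (int i = 0; i < 100000; ++i) ++counter;  // atomic read-modify-write
    }

    int main() {
      std::thread t1(work);
      std::thread t2(work);
      t1.join();
      t2.join();
      return (counter.load() == 200000) ? 0 : 1;  // always 200000 with std::atomic
    }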

Chris’s presentation is available as slides or a video.

Vectorization and LArSoft – James Amundson

In Single Instruction Multiple Data (SIMD) processing, a single instruction performs the same operation on multiple data items. We used to be able to just wait for hardware improvements to make code faster, but that is no longer true.

SIMD instructions have the potential to improve floating point performance two to 16 times. Jim showed examples of taking advantage of SIMD instructions, encouraging people to look at the different libraries that do this. Many widely-available math libraries include SIMD intrinsic support. He showed a LArSoft roofline analysis performed by Giuseppe Cerati using Intel Advisor.
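
To make the idea concrete (an illustrative sketch, not taken from Jim's slides), a loop like the one below is a typical candidate for SIMD auto-vectorization when built with optimization (e.g. -O3 and a suitable -march flag), because each iteration applies the same arithmetic to independent, contiguous data:

    #include <cstddef>

    // y[i] = a*x[i] + y[i]: the same operation applied to many data items.
    // Unit-stride access and independent iterations let an optimizing compiler
    // emit SIMD instructions that process several floats per instruction.
    void saxpy(float a, const float* x, float* y, std::size_t n) {
      for (std::size_t i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
      }
    }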

Jim’s presentation is available as slides or a video.

LArSoft CI System Overview – Vito Di Benedetto 

Continuous Integration is a software engineering practice in which changes to the software are immediately tested and reported. The goal is to provide rapid feedback so that defects introduced by code changes are identified as soon as possible. Vito covered how to run CI and obtain results, with detailed examples. The questions included a request to see a sample script, so Vito brought up the Redmine page: https://cdcvs.fnal.gov/redmine/projects/lar_ci/wiki.

Vito’s presentation is available as slides or a video.

Spack and SpackDev build system – James Amundson

LArSoft and art are moving to the Spack and SpackDev based build system. This system does not have one-for-one replacements for existing tools. Spack is a package manager designed to handle multiple versions and variants – https://spack.io/ and https://github.com/LLNL/spack. The original plan was to have a full demo at this workshop, but Spack development is behind schedule, so we could only explore Spack functionality. To try Spack, go to https://github.com/amundson/spackdev-bootstrap

Jim’s presentation is available as slides or a video.

Debugging tools – Paul Russo 

Paul explained the basics of using the gdb debugger to find and fix errors in code. The compiler must help the debugger by writing a detailed description of the code to the object file during compilation. This detailed description is commonly referred to as debugging symbols.
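
As a small illustration of that workflow (a hypothetical example, not from Paul's slides), one can compile with -g so the debugging symbols are written, run the program under gdb, and use a backtrace to find the faulting line:

    // Hypothetical crash.cc, to be built and examined with, for example:
    //   g++ -g -O0 crash.cc -o crash   (-g writes the debugging symbols)
    //   gdb ./crash
    //   (gdb) run        (stops when the program crashes)
    //   (gdb) bt         (the backtrace points into broken())
    //   (gdb) print p    (inspect variables in the selected frame)
    int* make_null() { return nullptr; }

    void broken() {
      int* p = make_null();
      *p = 42;  // deliberate null-pointer write: the program crashes here
    }

    int main() {
      broken();
      return 0;
    }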

Paul’s presentation is available as slides or a video.

Profiling Tutorial – Soon Yung Jun

Soon Yung gave an introduction to computing performance profiling with an overview of selected profiling tools and examples. Performance tuning is an essential part of the development cycle. Performance benchmarking quantifies usage/changes of CPU time and memory (amount required or churn). Performance profiling analyzes hot spots, bottlenecks and efficient utilization of resources. Soon also gave profiling results of the LArTest application with IgProf and Open|Speedshop.

Soon’s presentation is available as slides or a video.

Feedback

Comments include:

  • I found most of it useful. I’m fairly new to all of this, so it was mostly all new, useful information.
  • Increasing the portions for beginners or have a stand alone workshop for beginners.
  • More hands-on, interactive tutorials for the audience, even if it means installing some software/programs in advance.
  • I would like to have some topics which may be aimed towards participants who are less-than experts with the hope that us physicists can be more than just second-rate programmers.
  • Mostly; I thought the topics were good choices but would have preferred it to be a bit more hands-on.
  • I think these topics were the right ones; maybe even more on running on HPC resources next time.

Thanks to all who presented and/or participated.

May 2017 LArSoft Offline Leads Meeting Notes

This ‘meeting’ was handled via email and a Google document. We heard from all but one experiment (SBND). If there are no objections, we may try this every other month: we’ll meet in June, but use the Google doc for July.

LArSoft – Erica Snider

  • The highest project work priorities are: LArSoft Workshop planning; continuation of association usability project work (which should be coming to a close soon); infrastructure and design adaptations for ProtoDUNE SP/DP and ICARUS; deployment of the new Spack-based build system; and re-factoring of LArG4. Other short term priorities can be found in the 2017 work plan document (May 2 update) here: https://indico.fnal.gov/conferenceDisplay.py?confId=14198 
  • As noted at the May 23 Coordination Meeting, details of the Spack / SpackDev based build system are not entirely finalized. In particular, details of the user interface are still open to changes based on feedback from beta testing and during presentations. We hope to have a deployment plan and schedule update at the next LArSoft Coordination Meeting. There will be a detailed discussion at the LArSoft Workshop. It seems likely that deployment will occur after the workshop.
  • LArSoft has been contacted by ArgonCube to discuss possible integration strategies for pixel readout LArTPCs.
  • ICARUS/MicroBooNE collaborators have re-written MicroBooNE signal processing classes to use art tools with a common abstract interface. This work is similar to that done by DUNE (David Adams) for ADC simulation and response functions. The project will help to coordinate these efforts in order to consolidate on a common set of interfaces to be used across all experiments for these processing steps. At that point, experiments will have the option of moving their now proprietary code into LArSoft to create a library of signal processing tools.
  • The LArSoft workshop will have both profiling and debugging tutorials based on user request.

DUNE – Thomas R Junk

  • We are focusing on getting ready for the ProtoDUNE run. The Track-shower CNN looks very promising but takes substantial CPU to run. Installation of TensorFlow is intended to help this issue.
  • We would like to thank Erica for giving the LArSoft tutorial at the DUNE collaboration meeting.
  • One thing that cropped up recently is how to handle different interaction time hypotheses for particles in an event. In ProtoDUNE, there can be four or more measurements of the time of an interaction: associated beam arrival-time information, cosmic-ray counter hits, photon-detector hits, and a diffusion-based measurement. Some of these may have mis-associations, and multiple candidate times may be reconstructed. This is probably just an analysis issue of deciding what we want to do with these candidate times. The obvious use is to calibrate timing offsets in the detector, but there may be others, such as improving the reconstruction of particles broken across different volumes.
    • LArSoft response: “We should have a discussion at a LArSoft Coordination Meeting, or at a dedicated meeting to discuss strategies for handling this issue. It might be “just an analysis issue”, but to the extent that it potentially affects various position-dependent corrections, it might also need to be handled more rationally within the production reconstruction phases. Either way, it would be helpful to develop a consensus within the community as to how we will approach the problem.”

ICARUS – Daniele Gibin

  • The ICARUS spokesperson is Carlo Rubbia.

LArIAT – Brian J. Rebel

  • Brian has taken on the role of Technical Coordinator for SBND and is stepping away from LArSoft coordination. He will be available for consultations, but will no longer be able to contribute updates to the code or participate in the meetings.
  • Jonathan Asaadi will be fulfilling the role of Offline Lead for LArIAT.

MicroBooNE – Wesley Robert Ketchum

  • Gave an overview of the (previously) proposed restructuring of the G4 simulation step at a recent SBN analysis meeting, decoupling the GEANT4 pieces from the “propagation” pieces for ionized electrons. Feedback was that these are potentially very important changes, for both technical reasons (potential for a reduction in memory footprint on the grid, fewer rounds of simulation reprocessing, flexibility in combining different types of events) and physics reasons (ability to take energy depositions from a neutrino interaction and translate them within one detector, or even from one detector to another, to study systematics). Spoke today with Hans on where that stands, and we are on the same page with the other G4 restructuring work. It would be good to see all of this in place on the time scale of this summer for ICARUS large-scale processing and potentially a MicroBooNE processing campaign: is that possible? Along with the work itself, how one updates/does the backtracking remains an unanswered question: it is not technically difficult, but it could imply significant changes in downstream code depending on how it is done. It may take another person, or a few more, working on it to really see it through.
    • LArSoft response: “The original schedule for this work was not met, and has not yet been updated. We will work with Hans to make that happen. Until then, we cannot comment on when the project will be completed, except to note that there has been steady progress on all of the underlying tasks. The project and schedule are being tracked in issue 14454.”

SBND – Roxanne Guenette, Andrzej Szelc

  • No report

Items from the Offline Leads Meeting on 4/19/17 to follow up on:

  1. DUNE wants something added to the hit object to store dual-phase-specific quantities. Robert Sulej is working on this with a dual-phase collaborator. The additional hit parameters are used only in the event display. Having them inside the hit would make the code simpler, but keeping the parameters separate is also reasonable since they are specific to dual phase. In the code branch being developed there is a working solution that keeps the hit parameters in a separate collection, with no need for Assns.
    • 5/25/17 update – need to follow up to see if keeping the hit parameters in a separate collection works.
  2. Presentation from a student on a CNN and adversarial network? According to Robert Sulej, it is possible to imagine such a machine-learning-based filter for the detector/E-field response simulation, but that is future work. They are currently working instead on providing a data-driven training set for CNN model preparation. They need time to understand the results before they can say what the limitations of the tool are.
    • 5/25/17 update – This has been on the schedule for a coordination meeting, and has been postponed at the request of the authors
  3. From 3/22/17 meeting: SBND has been trying to include files inside GDML and has run into problems. Gianluca has been helping debug this. A new version of ROOT may be a solution.
    • 4/18/17 update: The version of ROOT used by LArSoft has changed a few times over the last few weeks. Andrzej Szelc said they haven’t looked into this because they were focused on getting the basic geometry in. Once that is done, they will look at this and hope the new ROOT version has fixed it. So no ticket has been written yet (to ROOT or LArSoft), since it isn’t clear whether it is still an issue.
    • 5/25/17 update – still waiting.

Please let us know if there are any corrections or comments to these meeting notes.

Katherine & Erica

Updated Continuous Integration – May 2017

Information moved to: https://larsoft.org/continuous-integration/ on 12/13/21.

Various studies have shown that the later in a project errors are found, the more it costs to fix them.[1][2][3] The Continuous Integration (CI) approach of software development adopted by LArSoft can save time by finding errors earlier in the process. As Lynn Garren noted, “Using CI allows you to test your code in the complete LArSoft environment independent of other people, which can save you time.” Another important benefit is noted by Gianluca Petrillo, “By using the CI system, I gain confidence that my code changes will not break other people’s work since the tests in the CI system will notify me if something went awry.”

The LArSoft Continuous Integration system is the set of tools, applications and machines that allows users to specify and automate the execution of tests of LArSoft and experiment software. Tests are configured via text files in the various repositories and are launched in response to an http trigger initiated manually by a user or automatically on every git push command to the develop branch of the central repositories. Arguments in the trigger specify run-time configuration options such as the base release to use, the branch within each repository to build (the develop branch by default), and the test “suite” to run, where each suite is a workflow of tests specified in the configuration files. Prior to each test, the new code is built against the base release. The CI system reports status, progress, test results, test logs, etc. to a database and notifies users with a summary of results via email. A web application queries the database to display a summary of test results with drill-down links to detailed information.

As Erica Snider said, “People focus on the testing needed for their particular experiment.  One benefit of our CI system is that, every time a change is made, it automatically tests the common code against all of the experiment repositories, then runs the test suites defined by each of the experiments. This allows us to catch interoperability problems within minutes after they are committed.”

There have been a number of updates to CI in recent months aimed at making the system easier and more intuitive for users, so please try it.

You can find more information about CI at http://larsoft.org/continuous-integration/ with detailed instructions on how to run jobs at: https://cdcvs.fnal.gov/redmine/projects/lar_ci/wiki

Training – March 2017

There is a new training page on the larsoft.org website that provides useful information on training for LArSoft in one spot. While designed for people new to LArSoft, it has links to information that may be of value to all. It can be found at http://larsoft.org/training/.

A valuable source of training is the annual series of LArSoft workshops. In 2017, the workshop will be held on June 20. The topics will be:

  • SPACK build system tutorial – In-depth, hands-on exploration of features, how to configure builds
  • Introduction to concurrency – What this means, why it is important, and what it implies for your code. A general discussion in advance of multi-threaded art release
  • Debugging and profiling tutorial
  • Continuous Integration (CI)  improvements and new features

Information, including presentations, will be added to the Indico site.

–Katherine Lato

Expectations for Contributing Code to LArSoft – January 2017

Information moved to: https://larsoft.org/expectations-for-contributing-code-to-larsoft/ on 12/15/21.

While developing experiment-specific code, a developer may work on a feature (such as an algorithm, utility or improvement) that can be shared with the community of experiments that use LArSoft. In most cases, this can be done easily since the modified code does not affect the architecture and doesn’t break anything. But when the feature affects the architecture, a few more steps are needed to ensure that such contributed code can be properly shared across experiments and integrates well into the existing body of code. This article outlines the process to introduce and integrate into the core LArSoft code repositories both non-architectural features and architecture-affecting features.

The intent of the process is to achieve a smooth integration of new code, new ideas or improvements in a coordinated way, while at the same time minimizing any additional work required beyond the normal development cycle. Many changes will not require prior discussion. In cases with broad architectural implications, getting feedback and guidance from the LArSoft Collaboration or the core LArSoft team early in the development cycle may be required.

Process for contributing a non-architectural, non-breaking change to LArSoft:

  1. Become familiar with the design principles and coding guidelines of LArSoft.
  2. Develop the code including comments, tests and documentation.
  3. Offer it by talking about it to LArSoft team members and maybe give a presentation at the LArSoft Coordination Meeting.

The rest of this article addresses a breaking, architectural change to LArSoft. Remember, all other cases will involve a more simplified process.

Process for contributing an architectural, breaking change to LArSoft:

  1. Someone working on an experiment has an idea, an improvement, or a new feature that affects the LArSoft architecture that can be shared in the core LArSoft repositories.
  2. Developer contacts the LArSoft Technical Lead or other members of the core LArSoft team to discuss the idea. Discussion can include an email thread, formal meeting, chat in the hallway, phone call, or any communication that works.
    • May find that the feature, or a suitable alternative, is already in LArSoft, thus saving the need to develop it again.
    • At this point, a decision will be made as to whether further discussion or review is needed, and where that discussion should take place. If other experts are likely to be required, a plan for including them in the process will be developed. The division of labor for the architectural changes will be discussed as well.
  3. Developer learns/reviews the design principles and coding guidelines of LArSoft.
  4. Developer prepares a straw proposal or prototype for the change.
  5. For major changes as determined in step (2), the proposal should be presented at the biweekly LArSoft Coordination Meeting. Depending on the feedback, more than one presentation may be useful.
  6. The developer writes the code, including comments, tests, and examples as needed, and keeps the LArSoft team informed on the status of work.
    • Any redmine pages or other technical documentation should be written during this time as well.
    • For new algorithms and services, an entry in the list of algorithms  should be made.
  7. When development is completed,  request that it be merged onto the develop branch since this is a breaking change.

There is a cost in making things workable for other experiments, but the benefit is that other experiments develop software that is usable by all experiments. The more this happens, the more all experiments benefit.

When designing LArSoft code, it’s important to understand the core LArSoft suite and all the components used by it. It’s also important to follow the design principles and coding guidelines so that what is developed can be maintained going forward. A good place to start is at Important Concepts in LArSoft  and by reading the material and watching the video about Configuration. It is important to follow the guidelines to have configuration-aware code that is maintainable with easy-to-understand configurations. The less time that is spent on debugging, the more time that can be spent on physics.

Once LArSoft contributors are aware of how LArSoft works, including the principles and guidelines for developing software, and have discussed the new feature with the LArSoft team, they will usually be asked to make a presentation at the LArSoft Coordination meeting. The idea here is to share the plan and the approach to solicit more ideas. Treat the initial presentation as a straw proposal–something that is designed to solicit feedback, not the final implementation that must be defended. At the same time, if a suggestion would double or triple the work required to implement it, and there isn’t a strong need for that suggestion to be implemented, it can be noted and set aside. The contributor is in charge of what he or she implements. The goal is to share software to reduce the development effort overall. It also encourages communication across experiments. The more collaborations can benefit from each other’s work, the better off we all are.

Details on developing software for LArSoft can be found on the Developing with LArSoft wiki page. By contributing to the common LArSoft repositories, improvements to the code can be used by everyone.

— Katherine Lato & Erica Snider

CERN LArSoft Tutorial Report – November 2016

Introduction

CERN hosted a one-day LArSoft tutorial designed for those who are joining reconstruction and analysis efforts in liquid argon experiments but are new to the LArSoft framework. The sessions were video recorded, and the recordings are available along with the presentations; see the links at the end of this write-up.

Organizers of the event would like to acknowledge Marzio Nessi for supporting the event and Audrey Deidda and Harri Toivonen for their significant help with solving organizational issues.

LArSoft tutorial overview

Screenshot from the video of the day

The participants came from many scientific research fields, not only from the liquid argon community, so in the morning session the liquid argon TPC technology was presented. This introduction included discussion of a number of the challenges related to the detection technology as well as challenges in data reconstruction and analysis. This background was followed by a short introduction to the LArSoft framework.

The morning hands-on session was led by Theodoros Giannakopoulos, who gave an introductory talk about the organization of the Neutrino Cluster at CERN. He also covered part of the LArSoft installation and explained how to set up the environment on Neutrino Cluster machines.

During the hands-on session, most of the participants logged into the Neutrino Cluster nodes using their own laptops with their own favorite operating system and shell. Having computing experts there was very important, especially Theodoros Giannakopoulos who was extremely helpful.

The purpose of the hands-on session was to learn how to run simulation and reconstruction jobs in LArSoft and how to visualize events to inspect the results. As an example, we chose a 2 GeV/c test-beam pion in the ProtoDUNE-SP geometry. Robert Sulej’s slides guided participants during the hands-on session. In many places, we pointed to the in-depth materials from the Fermilab LArSoft tutorial and also linked to slides from the YoungDUNE tutorial; for example, the event display slides were very helpful.

The afternoon session started with a talk by Wesley Ketchum titled “Reconstruction as analysis.” The reconstruction in Liquid Argon is a challenging task, but developing reconstruction algorithms to overcome a problem is also a great source of satisfaction. It is also very important to make progress in the Liquid Argon data analysis. As Wes said, it is  the “process of taking raw data to physically-meaningful quantities that can be used in a physics analysis.” Wes presented an overview of reconstruction algorithms in LArSoft and several use cases from MicroBooNE data analysis experience.

Diagram from Wesley Ketchum’s presentation

Wes prepared examples for the afternoon hands-on session, during which he introduced gallery as a lightweight tool for analyzing simulation and reconstruction results, also appropriate for exploratory work before code is moved into the LArSoft framework. A short introduction was followed by coding examples. The last part of the hands-on session explained how to “write and run your own module, step by step.”

Participants learned about the organization of LArSoft modules and algorithms and their configuration files. The aim of the coding exercise was to access information stored in data products (clusters) and the associations between them (hits assigned to clusters) so participants could make histograms of simple variables such as the number of clusters. They also had the opportunity to learn about matching the reconstruction results to the simulation truth information.

Two weeks after the tutorial, several people are still in contact with us, progressing in their understanding of the framework and starting their real developments. The tutorial also reached many members of the double-phase LArTPC community. This shows the advantage of having a detector-agnostic approach where development effort can be used efficiently. In the end we are all heading towards the same physics goals.


Links to Material

The video from the morning session is available here. It covers the following presentations:

  1. LArTPC detector basics, ProtoDUNE, DUNE goals – Dorota Stefan (CERN/NCBJ (Warsaw PL))
  2. Introduction to LArSoft and DUNE tools – Robert Sulej, (FNAL / NCBJ)
  3. Tutorial part I – Dorota Stefan (CERN/NCBJ (Warsaw PL)) , Wesley Ketchum (Fermi National Accelerator Laboratory) , Robert Sulej (FNAL / NCBJ)

Note, you need a CERN user account to access the cluster and do the exercises using preinstalled software. It is possible to use a local installation on a laptop. See the information at the beginning of the Indico page about the session.

The video from the afternoon session is available here. It covers the following presentations:

  1. Reconstruction as analysis? – Wesley Ketchum (Fermi National Accelerator Laboratory)
  2. Tutorial part II – Wesley Ketchum (Fermi National Accelerator Laboratory) , Robert Sulej (FNAL / NCBJ) , Dorota Stefan (CERN/NCBJ (Warsaw PL))

— Dorota Stefan & Katherine Lato

LArSoft/LArLite integration – September 2016

This summer, I helped create the first Liquid Argon Software (LArSoft) shared algorithm repository. Shared repositories are git-maintained directories of code that do not require the underlying framework or other LArSoft dependencies. By using shared repositories, development of common software can occur both in LArSoft and in a third-party system. One such system is LArLite, a lightweight analysis framework developed by MicroBooNE scientist Kazuhiro Terao. Users of LArLite only need to set up the desired shared repositories to gain access to the code.

I worked on LArSoft/LArLite integration within the Scientific Computing Division (SCD),  which provides core services to the multitude of experiments being conducted at Fermilab. By enlisting computer scientists as a separate but well-connected group to the experiments, expert knowledge is focused to provide experiment analysts with an abundance of purpose built, robust computing tools.  I used the groundwork laid by a SCD project finished in the spring of 2016 to support shared repositories in LArSoft.

The computing needs of each experiment are unique, though all share some common elements. For example, many experiments at Fermilab employ art as their event processing framework, producing well-documented and reproducible simulations. Since its creation, the development of LArSoft has been guided by the needs of experiments to provide a sturdy framework for simulation, reconstruction and analysis of particle physics interactions in liquid argon. LArSoft is a collection of algorithms and code that exist in many git repositories, and corresponding products managed by the Unix Product Support (UPS) system. The UPS system provides a simple and safe method of establishing working environments on Unix machines while avoiding dependency conflicts. This system is available on most computing systems provided by Fermilab and can also be easily set up by users on their own systems. UPS is  relied upon for ensuring consistency between software environments. To cater to users’ needs, LArSoft has to be flexible to incorporate useful code developed independently of Fermilab supported machines.

The LArLite code repository was written to allow for liquid argon analysis in a lightweight, Python-friendly system with a fast setup procedure. Development and progress on algorithms have been made in this framework particularly because of its quick setup time on personal computers that don’t have a well-populated UPS area. Many of the core algorithms used in LArLite, however, are either not available in LArSoft or follow a very similar design to those in LArSoft, causing a duplication of effort.

Having two copies of an algorithm exist in different build environments is undesirable as this creates discrepancies and incompatibilities. These discrepancies and incompatibilities become increasingly hard to rectify if changes are made on either side. For this reason, the shared repository must be able to be built with both native build systems. Another issue arises as LArLite was not developed with shared repositories in mind and has no tagged releases. LArLite only has a monitored master branch which may not be compatible with shared repositories in the future. For shared LArLite code to be reliably used within LArLite as well as LArSoft, a stable supported release needed to be created.

Working in the SCD, I created a first demonstration of using shared repositories for merging LArSoft and LArLite by moving the LArLite code GeoAlgo into the shareable repository larcorealg.  Building on the earlier groundwork, I ensured compatibility between the two native build systems, CetMake and GNUmake, and resolved dependency issues by using the UPS system and editing LArLite core files.  After this demonstration, a more complicated  LArLite algorithm, Flash Matching, was moved to a shared repository in coordination with the algorithm developer, Ariana Hackenburg. Here, LArLite services and data products were interchanged for their corresponding and newly shared versions  in LArSoft. An integrated LArSoft/LArLite system benefits all users, as it maintains the setup speed of LArLite to support a rapid development cycle, while allowing experiments like MicroBooNE to officially adopt and implement important algorithms as part of LArSoft with no additional effort. Most importantly, it allows users on both sides to access previously inaccessible algorithms and data.

The introduction of LArLite algorithms into shared git repositories demonstrates the potential for including other third-party software in a similar fashion. Collaborative liquid argon software in shared repositories will require the work and commitment of many physicists. It’s my hope that the effort is made to include more algorithms into shared repositories since it has the potential to improve liquid argon based experiments at Fermilab.

–Luke Simons

LArSoft Workshop Report – August 2016

The 2016 LArSoft Usability Workshop on June 22 and June 23 focused on usability, interfaces and code analysis. Over 50 people registered, filling the rooms. Remote attendance was supported as well. The section titles listed below are links to the material presented, where available.


Table of Contents:
1 – Videos
2 – Opening Remarks
3 – Recent Relevant LArSoft Efforts
4 – Gatekeeping
5 – Pandora
6 – Documentation
7 – SpackDev
8 – FHiCL-file
9 – FHiCL Case Study
10 – gallery
11 – Peer Review
12 – Code Analysis Process
13 – GENIE and Geant4
14 – Performance Profiling
15 – Feedback


During the meeting, a dozen issues were recorded at: https://cdcvs.fnal.gov/redmine/projects/larsoft/issues 
About half of the sessions were recorded. (Not by design. This was the first time we tried to capture video.) Note that these video recordings were not the main point of the workshop, and they have been edited to avoid side conversations at the beginnings and ends of presentations and beeps as people joined the bridge. Feedback is appreciated. Please contact klato@fnal.gov.


Erica Snider – Opening Remarks

The two-day workshop began with a review of usability and the conference by Erica Snider. LArSoft is the users’ code. The Fermilab LArSoft team is here to help and input from the users is critical to what we do. Interruptions were encouraged throughout the workshop.

Panagiotis Spentzouris – Welcome

Although Panagiotis Spentzouris was not able to attend the opening session, during his remarks on the second day, he said, “LArSoft is one of the centerpieces of our strategy for developing and supporting common tools for experiments with experiments.” It’s not an easy model. It requires discipline from the users and SCD support. The work of the experts on the SCD side and the users’ work has made it a success, and it is important to continue along this path.

Steve Brice – Steering Group Welcome

Steve Brice gave the steering group welcome. (You can listen to part of it below, if you like.)

The liquid argon effort involves many detectors in increasingly complex ways with experiments being able to learn from one another. The LArSoft team has achieved enormous success in connecting the different endeavors. Going forward, it will get more difficult. “Keep up the good work. It’s greatly appreciated and widely acknowledged.”

Gianluca Petrillo – Recent relevant LArSoft efforts

Usability is defined by ISO 9241-11 as, “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”

After last year’s workshop, LArSoft interviewed a number of users and stakeholders, identified several areas of intervention and collected a list of desirable items.  We selected some of them: examples and use of associations. There is a wiki page listing the new examples, and we’re happy to write more. The examples follow best practices including documentation in Doxygen format. The LArSoft team endorses test writing and can provide help in writing tests. Several questions/comments covered issues about recording tests, how tests evolve, the need for user help to make them better as well as making the code better by extending the interface. The video of the presentation is available here.

Erica Snider – Maintaining balance in LArSoft: gate-keeping, usability, progress

We have to balance the ability to introduce new ideas and new features quickly with writing production-quality code. Users depend on production quality and developers depend on agility. Both of these are important to usability. Discussing code changes at a coordination meeting is necessary for all changes that affect behavior, i.e. most cases.

There are policies, guidelines and standards at all levels: design principles, coding guidelines, the git branching model and documentation guidelines. The point of this model is to produce shareable, relatively uniform code that is recent; that’s why we have weekly releases. The focus is on finding consensus across experiments for changes, which requires gate-keeping. We want to find the right balance: enough agility to keep people interested in producing code, with enough gate-keeping to keep people using it. Are we in the right spot? The video of the presentation is available here.

John Marshall – Lessons learned from collaborative software development using Pandora

The Pandora multi-algorithm approach to pattern recognition uses large numbers of algorithms (80+), each designed to address specific topologies and gradually build-up a picture of events. It relies on functionality provided by the Pandora Software Development Kit (SDK), documented in EPJC 75:439. Algorithms are structured around a number of key operations and can be written in pseudo-code form; what differs between real algorithms is the precise logic, based on topological information, that determines when to request operations such as merging or splitting Clusters (collections of Hits). Ideas about Event Data Model requirements, developer training, communication and style guides were presented, alongside feedback from Pandora developers. The best features include the ability to quickly test new ideas, that the SDK services can be trusted completely, the simple XML-based configuration and the visual debugging functionality (seeing a problem presented visually can lead to rapid understanding). Things that haven’t worked so well include the build mechanics (a difficulty associated with inclusion in multiple software frameworks) and attempts to provide some globally reusable features (the current geometry model, and some Hit properties, still lean towards their collider-detector usage). The video of the presentation is available here.

Katherine Lato – Documenting LArSoft for ease of use and learning

Good documentation is needed at all levels. For example, when documenting a LArSoft algorithm, it’s important to have a high-level view available at http://larsoft.org/list, which includes a link to a detailed document, like a technical note available to the entire LArSoft community. Comments in the header and in the implementation code should be in a format that enables Doxygen to interpret them. It’s also important to describe important parts in the code. See LArSoft wiki (Redmine) page Code Documentation guidelines for a suggested template to follow. Don’t forget in-line comments in the code for maintainability. The video of the presentation is available here.

Patrick Gartung – SpackDev: a new development environment for LArSoft

We are looking at new build systems because the current system using environment variables has problems running on new operating systems and in Linux containers. Spack builds a stack of dependent software packages. It is open source, well documented and community supported, with information available from https://spack.io/ and https://github.com/LLNL/spack.

Spack generates environment modules that can be used to set up the environment; each package build is identified by a different hash. There are still things that need to be worked out. The video of the presentation is available here.

Kyle Knoepfel – Configuration best practices, helpers and FHiCL-file validation

Kyle walked through setting up a FHiCL file, including how to set up a PROLOG, the #include facility and the rules about using it. The directory of a file to be included must be present on the FHICL_FILE_PATH environment variable. Don’t abuse #include; only #include files that contain nothing but prologs. A common frustration is how to know what parameters to specify for a given module. While looking at the source code is one solution, it would be better to have a system that documents itself. Kyle demonstrated the ‘art --help’ and ‘art --print-available-modules’ facilities. For the ‘art --print-description’ facility, the allowed configuration is printed to the screen if users provide the appropriate C++ structure. The main point is that easy-to-maintain and understandable code means more time spent on physics instead of debugging coding errors.
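
As a rough sketch of those pieces (the file, module and parameter names below are invented for illustration, not taken from Kyle's slides), a prolog-only file can define a default configuration, a job file can #include it, and a single parameter can then be overridden:

    # --- myfilter_defaults.fcl: a hypothetical prolog-only file ---
    BEGIN_PROLOG
    myfilter_default:
    {
      module_type: "MyFilter"   # hypothetical module
      Threshold:   5.0          # hypothetical parameter
    }
    END_PROLOG

    # --- myjob.fcl: a hypothetical job configuration ---
    #include "myfilter_defaults.fcl"

    physics:
    {
      filters:
      {
        myfilter: @local::myfilter_default
      }
    }

    # override a single parameter without editing the included defaults
    physics.filters.myfilter.Threshold: 7.5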

Robert Kutschke – A case study in using FHiCL to ensure single points of maintenance

When designing its use of FHiCL, Mu2e set two goals: parameters should have a single point of maintenance and FHiCL that can be run interactively should be runnable on the grid without editing. Rob’s talk showed several FHiCL fragments from actual Mu2e MC production FHiCL files. One technique is to define base configuration fragments and apply deltas to that base; this naturally leads to deep configuration hierarchies. The trick to making an interactive FHiCL file grid-ready is to design the grid scripts and the interactive FHiCL files together – the grid scripts do their job by applying prefixes and postfixes to the interactive FHiCL file. We have never needed to “reach in and edit” a FHiCL file as part of a grid job.
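
The flavor of the base-plus-delta technique can be sketched as follows (the file and parameter names are invented for illustration and are not from the actual Mu2e configuration):

    #include "my_experiment_base.fcl"   # hypothetical base configuration fragment

    # deltas relative to the base: restate only what differs for this job
    physics.producers.generate.meanEnergy: 2.5
    services.MySeedService.baseSeed:       12345
    outputs.out1.fileName:                 "myjob_interactive.art"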

Marc Paterno – Using gallery for data access: with demo and discussion

gallery provides access to event data in art/ROOT files outside the art event-processing framework executable, without the use of EDProducers, EDAnalyzers, etc., and thus without the facilities of the framework (e.g. callbacks from framework transitions, writing of art/ROOT files). The distribution bundle larsoftobj was introduced to give a single-command installation of all the UPS products needed to use gallery to read LArSoft-created art/ROOT files. Installation instructions are at http://scisoft.fnal.gov/scisoft/bundles/larsoftobj/ (look for the newest version and view the HTML file for instructions). Marc provided a demo that read through 100 files and filled three histograms. He showed the code and talked through it, answering questions.
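
A condensed sketch of that kind of gallery loop (the input file name and producer label below are placeholders, and this is not Marc's actual demo code) looks roughly like this:

    #include "gallery/Event.h"
    #include "canvas/Utilities/InputTag.h"
    #include "lardataobj/RecoBase/Hit.h"
    #include "TFile.h"
    #include "TH1F.h"
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> files = {"reco_sample.root"};  // placeholder input file
      art::InputTag const hit_tag{"gaushit"};                 // placeholder producer label

      TFile out("gallery_demo.root", "RECREATE");
      TH1F hIntegral("hIntegral", "Hit integral;ADC;hits", 100, 0., 1000.);

      // Loop over events with no framework executable: no modules, no callbacks.
      for (gallery::Event ev(files); !ev.atEnd(); ev.next()) {
        auto const& hits = *ev.getValidHandle<std::vector<recob::Hit>>(hit_tag);
        for (auto const& hit : hits) hIntegral.Fill(hit.Integral());
      }

      out.Write();
      return 0;
    }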

‘Discussion of Ideas’ and ‘Using tips and techniques on your code.’

Several issues were recorded from the afternoon working sections.

Robert Kutschke – Peer review: It’s not just for physics anymore!

This is about things we should have been doing all along. The HEP analysis peer review process works really well and is similar. Rob covered what we do well with HEP and integration testing, along with some lessons learned from the Mu2e reviews, such as that much of the value came from the preparation work for the review. A lesson from the software development community is that many errors are found by the author while preparing for the review. Use the specialists. Since much of the value is in the preparation, that means having deadlines, profiling and a prep presentation.

Erica Snider – LArSoft code analysis process

Writing good code is a process, and things would get better if we reviewed our code. We want the code analysis process to be as lightweight as possible for the situation and to be performed collaboratively with the code author(s). Erica went through the five basic steps of a review and the recent example of the PMA analysis. LArSoft hopes to create a culture that seeks code analysis. Thanks to Mike Wallbank and Bruce Baller for volunteering their code for the afternoon code analysis during the workshop.

Robert Hatcher – LArSoft use of GENIE and Geant4

If you’re going to generate events with GENIE, read https://cdcvs.fnal.gov/redmine/projects/nutools/wiki/GENIEHelper. “It will save you a world of hurt.” The improvement in usability and performance of the interface to Geant4 needs to be a cooperative effort between the PDS group and LArSoft. The video of the presentation is available here.

Christopher Jones – You too can do performance profiling

Types of profiling include timing and memory. The tool igprof can do both. Chris covered how to use igprof. The video of the presentation is available here.

Code and performance analysis working groups

Two groups focused on analyzing code examples volunteered by their authors: the TrajCluster algorithm by Bruce Baller and the BlurredCluster algorithm by Michael Wallbank.

Feedback

Comments include:

  • I enjoyed talking about LArSoft and the future of the project. I was very encouraged by all that was said and feel exciting times lie ahead for the software! Also enjoyed the code analysis and the discussion of new features of art/FHiCL.
  • The LArSoft steering group's welcome set a strong, positive tone for the workshop.
  • It was better than expected. I had assumed it would be more introductory, but it was actually oriented toward people who want to contribute to LArSoft rather than only be users, so it is very helpful for my work.
  • It was a little more advanced/high-level than I’d anticipated, but very useful.
  • It did not have as many overview talks as I expected. And there seemed to be few talks by the experiments.
  • Useful how-to instructions on the slides. In the code review session, Mike Wallbank was cutting and pasting igprof commands from Chris Jones's highly useful slides.
  • I was able to discuss with the experts the issues I didn't understand or thought were missing.

Thanks to all who presented and/or participated!

Opening the box: event reconstruction using Pandora – June 2016

by  Jack Weston for the Pandora Team, Cavendish Laboratory, University of Cambridge

The Pandora multi-algorithm approach to automated pattern recognition affords a powerful framework for event reconstruction in fine-granularity detectors, such as LAr TPCs. 

With its origins at the proposed International Linear Collider (ILC) experiment, the Pandora project began in 2007 as a particle flow calorimetry algorithm; that is, an algorithm that reconstructs the paths of individual particles, taking advantage of high-granularity tracking detectors and calorimeters. Since then, Pandora has grown into a flexible software framework for solving generic pattern recognition problems that involve points in time and space, particularly suited to the “photographic quality” images produced by LAr TPCs.

Behind Pandora’s approach to pattern recognition is the multi-algorithm paradigm: the idea that each particular topology can be addressed by one or more relatively straightforward, self-contained algorithms. Under this paradigm, solving a complex pattern recognition problem, such as reconstructing particles in a LAr TPC (Figure 1), requires a sizable chain of such algorithms, whose order of execution is able to exhibit data-dependent nonlinearity and recursion. Some algorithms are very sophisticated, others rather simple; together, they gradually build up a picture of the event. This approach marks a significant departure from the traditional practice of single algorithms for shower finding and track fitting, for example, and promises to provide a robust way of tackling intricate pattern recognition problems. It also affords a rich development environment since many algorithms, each addressing a given topology in a different way, can be safely bound together and work in harmony. This approach is able to provide a more sophisticated and accurate reconstruction than would be reasonably achievable by writing one algorithm alone.

"Figure

Figure 1: A 3D Pandora reconstruction, projected into the w-view. The input hits are from a Monte Carlo simulation of a 0.8 GeV charged-current νμ event with resonant π0 production. The event shows a proton (grey), a muon (red), and two photons (green, blue). The event is visualised using the Pandora event display, where the x axis represents a distance derived from drift time and the w axis a distance derived from the number of the wire recording the ionisation signal.

Figure 2: The Pandora reconstruction output in the LArSoft Event Data Model (EDM). The PFParticle object is associated with the 3D hits, clusters, tracks, seeds and vertices of the particle it represents. In addition, PFParticle parent-daughter links allow representation of a full particle hierarchy.

In a LAr TPC, the readout provides three 2D images of the event within the active detector volume. Each image shares a common coordinate, derived from the drift time. The second coordinate is derived from the number of the wire recording the ionisation signal in a given plane. The reconstruction begins with cautious, track-oriented 2D clustering before using a series of topological-association algorithms that conservatively merge and split the 2D proto-clusters. By considering pairs of 2D clusters, an extensive list of candidate 3D vertex positions is produced. A score is calculated for each candidate vertex and the best one is chosen. The 3D vertex position can then be projected back into the readout planes and used, for example, to split the 2D clusters at the projected vertex position if required. To reconstruct 3D tracks, Pandora uses track-matching algorithms to exploit the time-coordinate overlap between all possible groupings of 2D candidate clusters in different planes to predict the position of the cluster in the third plane. The time-overlap span, the proportion of matching sampling points and a chi-squared value are all neatly stored in a rank-three tensor, where the three indices are the clusters in each of the three views. The tensor is interrogated by a series of algorithm tools which identify any matching ambiguities and make careful changes to the 2D clusters until the tensor is diagonal and the cluster combinations are unambiguous. These matches are stored in ‘particles’, which provide a convenient means for collecting together objects reconstructed in the three readout planes.
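
As a purely illustrative sketch of the rank-three tensor idea, and not Pandora's actual implementation, the matching properties for each (u, v, w) triplet of 2D clusters could be indexed along these lines:

#include <map>
#include <tuple>

// Hypothetical stand-in for a reconstructed 2D cluster.
class Cluster2D;

// Matching properties computed for one (u, v, w) cluster combination.
struct OverlapResult {
  float timeOverlapSpan;      // common drift-time window of the three clusters
  float matchedSamplingFrac;  // fraction of sampling points that match
  float chiSquared;           // consistency of the predicted third-view positions
};

// A minimal "rank-three tensor": one element per (u, v, w) cluster triplet,
// indexed by the clusters in each of the three views.  Pandora's own tensor
// classes offer much richer navigation; only the indexing idea is shown here.
using OverlapTensor =
  std::map<std::tuple<const Cluster2D*, const Cluster2D*, const Cluster2D*>,
           OverlapResult>;

// A triplet is unambiguous ("the tensor is diagonal") when none of its three
// clusters appears in any other surviving tensor element; the algorithm tools
// modify the 2D clusters until every remaining triplet has that property.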

Showers can then be reconstructed (in 2D) by attempting to add branches to long clusters that represent shower spines—the showers are grown outwards from a spine by identifying branches, then branches-of-branches, and so on. To synthesise the 2D shower information into 3D showers, the tensor ideas can be re-used: the fitted shower envelopes for each pair of views are used to predict an envelope in the third view, and the enclosed hit fraction in this view is stored in a tensor, along with other cluster-matching properties. An analogous process of tensor-diagonalisation then yields the best 3D matches. Finally, 3D hits are created and the track, shower and vertex information is brought together to produce a full particle hierarchy, where each daughter particle comprises metadata, a list of 2D clusters, a 3D cluster, a 3D interaction vertex, and a list of any further daughter particles (Figure 2).

Figure 3: Pseudocode demonstrating procedures for creating and merging clusters and their associated API calls. This logic is almost identical between algorithms, regardless of the pattern recognition problem.

Such a multi-algorithm approach poses a significant software challenge. The Pandora Software Development Kit provides a software environment designed specifically for this purpose: it offers the basic building-blocks for abstracting high-level structure from a collection of points, such as clusters, as well as the means for users to adapt these building-blocks to suit their problem. It also provides the framework for running the complex, nonlinear chain of algorithms required and automates the handling of the building-blocks, allowing developers’ algorithms to focus on physics rather than technicalities like memory management. This facilitates rapid algorithm development. Instances of the objects in the Event Data Model (EDM) are owned by Pandora Managers, which are responsible for object and named list lifetimes. These Managers automate a complete set of low-level operations that facilitate the high-level operations required by pattern recognition algorithms. The algorithms contain the step-by-step instructions for finding patterns in the input data and use APIs to modify, create or destroy objects and lists. Much of the technical difficulty in writing highly object-oriented algorithms like these is alleviated by Pandora’s provision of APIs able to access and manipulate objects in the Pandora EDM, such as getting a named list of hits or creating new clusters. These APIs can be used in algorithms to interact with Managers and provide common low-level logic in a concise way. Examples of the API calls that would be encountered in algorithms for creating and merging clusters are shown in Figure 3. Such procedures are common to all pattern recognition problems, differing only in the logic that determines the ‘best’ clusters, which is specific to the problem at hand. Pandora also contains a ROOT-based event display that allows the navigation of particle hierarchies, 2D and 3D hits, clusters and vertices, as well as providing invaluable visual debugging possibilities within algorithms.
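
In the spirit of the Figure 3 pseudocode, a cluster-merging algorithm written against these APIs might look roughly like the following sketch. The class and helper names are invented, and the exact PandoraContentApi signatures should be checked against the SDK documentation.

#include "Pandora/AlgorithmHeaders.h"

// Sketch of a cluster-merging algorithm in the spirit of Figure 3.  The
// PandoraContentApi calls follow the usual pattern of passing the calling
// algorithm (*this); the problem-specific selection logic is a placeholder.
class ExampleMergingAlgorithm : public pandora::Algorithm {
private:
  pandora::StatusCode Run() override
  {
    // Ask the Manager for the current cluster list.
    const pandora::ClusterList *pClusterList(nullptr);
    PANDORA_RETURN_RESULT_IF(pandora::STATUS_CODE_SUCCESS, !=,
        PandoraContentApi::GetCurrentList(*this, pClusterList));

    // Topology-specific logic chooses which clusters to merge (placeholder).
    const pandora::Cluster *pClusterToEnlarge(nullptr);
    const pandora::Cluster *pClusterToDelete(nullptr);
    this->SelectClustersToMerge(*pClusterList, pClusterToEnlarge, pClusterToDelete);

    // The Manager handles the bookkeeping and memory; the algorithm only
    // states its intent.
    if (pClusterToEnlarge && pClusterToDelete)
    {
      PANDORA_RETURN_RESULT_IF(pandora::STATUS_CODE_SUCCESS, !=,
          PandoraContentApi::MergeAndDeleteClusters(*this, pClusterToEnlarge, pClusterToDelete));
    }

    return pandora::STATUS_CODE_SUCCESS;
  }

  // Hypothetical stand-in for the topology-specific selection logic.
  void SelectClustersToMerge(const pandora::ClusterList &, const pandora::Cluster *&,
      const pandora::Cluster *&) const {}

  // ReadSettings, which the SDK requires for parsing the algorithm's XML
  // configuration, is omitted from this sketch.
};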

Pandora works in conjunction with LArSoft by providing the ideal framework for the pattern recognition step. An art producer module creates Pandora instances, and configures and registers algorithms. For each event, hit information is passed from LArSoft into Pandora and, at the end, the Pandora reconstruction output, in the form of PFParticles, is passed back to LArSoft. Whilst LArSoft remains the core framework for the simulation and reconstruction process, this multi-algorithm reconstruction cannot be easily realised using LArSoft alone: Pandora provides the tools necessary for the job, as well as an environment conducive to developing algorithms under this paradigm. All interaction between LArSoft and Pandora is handled by the LArSoft module LArPandora, which serves as their mutual interface as well as translating Pandora’s inputs and outputs into the required formats (Figure 4). This module depends only on the library LArPandoraContent which, in turn, depends only on the Pandora SDK and visualisation libraries.
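
Schematically, the art-facing side of such an interface reduces to a producer module that reads hits and writes PFParticles. The sketch below is intended only to illustrate that boundary, not the actual LArPandora implementation: the module and parameter names are invented, and everything Pandora-specific (instances, algorithm configuration, the full output hierarchy and associations) is omitted.

#include <memory>
#include <string>
#include <vector>

#include "art/Framework/Core/EDProducer.h"
#include "art/Framework/Core/ModuleMacros.h"
#include "art/Framework/Principal/Event.h"
#include "fhiclcpp/ParameterSet.h"
#include "lardataobj/RecoBase/Hit.h"
#include "lardataobj/RecoBase/PFParticle.h"

// Schematic producer showing only the LArSoft-facing boundary: hits in,
// PFParticles out.
class PandoraInterfaceSketch : public art::EDProducer {
public:
  explicit PandoraInterfaceSketch(fhicl::ParameterSet const& p)
    : EDProducer{p}
    , hitLabel_{p.get<std::string>("HitLabel")}  // assumed parameter name
  {
    produces<std::vector<recob::PFParticle>>();
  }

  void produce(art::Event& e) override
  {
    // 1. Read the LArSoft hits for this event.
    auto const& hits = *e.getValidHandle<std::vector<recob::Hit>>(hitLabel_);
    (void)hits;

    // 2. Hand the hits to Pandora and run the pattern recognition (omitted).
    auto particles = std::make_unique<std::vector<recob::PFParticle>>();

    // 3. Write the PFParticle output back into the art event.
    e.put(std::move(particles));
  }

private:
  std::string hitLabel_;
};

DEFINE_ART_MODULE(PandoraInterfaceSketch)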

Figure 4: Diagram indicating the structure of the Pandora packages in LArSoft. This includes the LArSoft module LArPandora, the LArPandoraContent library and the SDK and monitoring libraries. Note that the monitoring library has a ROOT dependency. The SDK and monitoring library provide a framework that is independent of the pattern recognition problem to be solved: all the LAr TPC reconstruction logic resides in the LArPandoraContent library. The LArPandora module provides a LArSoft/Pandora interface for translating Pandora’s inputs and outputs.

At MicroBooNE, two different algorithm chains are used: PandoraCosmic, optimised for the reconstruction of cosmic rays and their daughter delta rays, and PandoraNu, optimised for the reconstruction of neutrino events. By running the PandoraCosmic chain first, cosmic rays can be identified and removed, providing the input for the PandoraNu reconstruction. Starting by manipulating the three sets of hits in 2D before synthesising these into 3D structures, Pandora gradually builds up a picture of the event through clustering, vertexing, and track and shower reconstruction. The full reconstruction currently consists of nearly 80 different algorithms that, together, abstract from the input 2D hits an information-rich 3D particle hierarchy, including information such as particle types, vertices and directions.

At the upcoming DUNE experiment, the detector will comprise multiple LAr TPC detector volumes, each of which will provide two or three 2D images. Events may cross the boundaries between volumes, posing a new challenge for the reconstruction. To solve this problem, a Pandora instance is created for each volume, which provides a MicroBooNE-like reconstruction of that detector region. Bringing together these independent reconstructions, logic is then required to decide which particles to stitch together across gaps—and to determine which particle is the ‘parent’. While algorithm re-optimisation to suit the different beam spectrum is required, the reusable Pandora interfaces ensure that MicroBooNE developments are directly applicable to DUNE.

Pandora provides a flexible platform for developing, visualising and testing pattern recognition algorithms. The LArPandoraContent library offers an advanced and continually improving reconstruction of cosmic-ray and neutrino-induced events in LAr TPCs and is used by the MicroBooNE and DUNE experiments. Pandora’s GitHub page can be found at https://github.com/PandoraPFA, with an accompanying paper describing the SDK on arXiv: http://arxiv.org/pdf/1506.05348v2.pdf (EPJC 75:439).

The Pandora Team. Photo courtesy Boruo Xu.