- IntegrationTests/ : HDL top files and simulations for chains of HLS modules and memories.
- TestBenches/ : Test bench code.
- TrackletAlgorithm/ : Algo source code.
- TrackQuality/ : BDT Track Quality specific source code.
- emData/ : .dat files with input/output test-bench data (corresponding to the memories between algo steps) + .tab files of data for LUTs used internally by algos.
- project/ : .tcl scripts to create HLS project, compile & run code.
Most of the following directions reference Vivado HLS and the corresponding `vivado_hls` command. You may use Vitis HLS instead by replacing all of the `vivado_hls` commands with `vitis_hls`.
An HLS project can be generated by running a tcl file with Vivado/Vitis HLS in the firmware-hls/project/ directory. e.g. To do so for the ProjectionRouter:
vivado_hls -f script_PR.tcl
This creates a project directory <project> ("projrouter" in the case of the above example). The project name is defined in the tcl script. To open the project in the GUI:
vivado_hls -p <project>
cd IntegrationTests/ReducedConfig/IRtoTB/script/
- make -j $(nproc) Work (makes HLS IP cores; run as many jobs as you have CPU cores).
- vivado -mode batch -source runSim.tcl (runs the Vivado simulation, which writes the data output from the chain to dataOut/*.txt).
- python ../../../common/script/CompareMemPrintsFW.py -p -s (compares .txt files in emData/ and dataOut/, writing the comparison to dataOut/*_cmp.txt. Uses Python 3.)
- make Work/Work.runs/synth_1 (runs synthesis, writes utilization & timing reports to the current directory).
- make Work/Work.runs/impl_1 (runs implementation, writes utilization & timing reports to the current directory). N.B. This step is optional and not required for code validation.
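For intuition, the comparison step boils down to checking that the simulation's memory prints match the emulation's reference prints line by line. This is a minimal conceptual sketch of that idea, not the actual CompareMemPrintsFW.py script (which handles file discovery, formats, and reporting):

```python
# Conceptual sketch (NOT the real CompareMemPrintsFW.py): compare a
# reference memory print against a simulation output, line by line.
def compare_mem_prints(reference_lines, output_lines):
    """Return a list of (line_number, reference, output) tuples for mismatches."""
    mismatches = []
    for i, (ref, out) in enumerate(zip(reference_lines, output_lines), start=1):
        if ref.strip() != out.strip():
            mismatches.append((i, ref.strip(), out.strip()))
    return mismatches

ref = ["0x01 0xAB", "0x02 0xCD"]
out = ["0x01 0xAB", "0x02 0xCE"]
print(compare_mem_prints(ref, out))  # one mismatch, on line 2
```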
These have test data corresponding to the contents of the memories between algo steps. Their data format is explained in https://twiki.cern.ch/twiki/bin/view/CMS/HybridDataFormat .
e.g. AllStubs*.dat contains one row per stub: "stub_number stub_coords_(binary)[r|z|phi|...] ditto_but_in_hex"; StubPairs*.dat contains one row per stub pair: "pair_number stub_index_in_allstubs_mem_(binary)[innerLayer|outerLayer] ditto_but_in_hex".
File naming convention: "L3" or "D5" indicates the barrel layer or disk number; "PHIC" indicates the 3rd coarse phi division of the given layer of the nonant.
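As an illustration of these conventions, the sketch below parses a hypothetical data row (the "number binary hex" layout described above; the exact bit contents vary by memory type) and extracts the layer/disk and phi region from a memory name. Both the sample row and the helper names are made up for this example:

```python
import re

def parse_stub_row(row):
    """Split one 'number binary hex' row and check that the binary and
    hex representations of the data word agree."""
    number, binary, hexa = row.split()
    assert int(binary, 2) == int(hexa, 16), "binary and hex fields should agree"
    return number, binary, hexa

def parse_mem_name(name):
    """Extract (region, number, phi_division) from names like 'L3PHIC' or 'D5PHIA'."""
    m = re.search(r"([LD])(\d)PHI([A-Z])", name)
    if m is None:
        return None
    region = "barrel" if m.group(1) == "L" else "disk"
    return region, int(m.group(2)), m.group(3)

print(parse_stub_row("01 101011 2B"))   # binary 101011 == hex 2B == 43
print(parse_mem_name("AllStubs_L3PHIC"))  # ('barrel', 3, 'C')
```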
Some of the files are large, so they are not stored directly in git. These are automatically downloaded when any of the scripts in the project/ directory are executed within Vivado/Vitis HLS.
These correspond to LUTs used internally by the algo steps. The .tab files are in C++ and can be #included and compiled. The .dat files are lists of hex numbers, which are easier to read in HDL.
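A hex-list .dat LUT of this kind can be loaded in a few lines. This sketch assumes one hex value per line, which is an illustrative simplification; real files may carry extra columns or comments:

```python
import tempfile

def load_lut(path):
    """Read a .dat LUT assumed to contain one hex value per line."""
    with open(path) as f:
        return [int(line, 16) for line in f if line.strip()]

# Demo on a temporary two-entry table:
with tempfile.NamedTemporaryFile("w", suffix=".dat", delete=False) as f:
    f.write("1A\n2B\n")
print(load_lut(f.name))  # [26, 43]
```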
The files that are downloaded by emData/download.sh were created by the CMSSW L1 track emulation, with the following recipe (adapted from the L1TrackSoftware TWiki).
cmsrel CMSSW_15_1_0_pre4
cd CMSSW_15_1_0_pre4/src/
cmsenv
git cms-checkout-topic -u cms-L1TK:fw_synch_250903
git clone https://github.com/cms-L1TK/MCsamples.git

A few configuration changes were made in order to output test vectors and lookup tables and adjust truncation. This required editing L1Trigger/TrackFindingTracklet/interface/Settings.h as follows:
--- a/L1Trigger/TrackFindingTracklet/interface/Settings.h
+++ b/L1Trigger/TrackFindingTracklet/interface/Settings.h
@@ -860,7 +860,7 @@ namespace trklet {
//IR should be set to 108 to match the FW for the summer chain, but ultimately should be at 156
std::unordered_map<std::string, unsigned int> maxstep_{
- {"IR", 156}, //IR will run at a higher clock speed to handle
+ {"IR", 108}, //IR will run at a higher clock speed to handle
//input links running at 25 Gbits/s
//Set to 108 to match firmware project 240 MHz clock
{"VMR", 108},
@@ -919,11 +919,11 @@ namespace trklet {
bool warnNoDer_{false}; //If true will print out warnings about missing track fit derivatives
//--- These used to create files needed by HLS code.
- bool writeMem_{false}; //If true will print out content of memories (between algo steps) to files
- bool writeTable_{false}; //If true will print out content of LUTs to files
- bool writeConfig_{false}; //If true will print out the autogenerated configuration as files
- std::string memPath_{"L1Trigger/TrackFindingTracklet/data/MemPrints/"}; //path for writing memories
- std::string tablePath_{"L1Trigger/TrackFindingTracklet/data/LUTs/"}; //path for writing LUTs
+ bool writeMem_{true}; //If true will print out content of memories (between algo steps) to files
+ bool writeTable_{true}; //If true will print out content of LUTs to files
+ bool writeConfig_{true}; //If true will print out the autogenerated configuration as files
+ std::string memPath_{"../data/MemPrints/"}; //path for writing memories
+ std::string tablePath_{"../data/LUTs/"}; //path for writing LUTs
unsigned int writememsect_{3}; //writemem only for this sector (note that the files will have _4 extension)
@@ -1000,7 +1000,7 @@ namespace trklet {
// Use chain with duplicated MPs for L3,L4 to reduce truncation issue
// Balances load from projections roughly in half for each of the two MPs
- bool duplicateMPs_{true};
+ bool duplicateMPs_{false};
// Determines which layers, disks the MatchProcessor is duplicated for
// (note: in TCB by default always duplicated for phi B, C as truncation is significantly worse than A, D)

Then compilation was done with the usual command:
scram b -j8

The algorithm was set to HYBRID_NEWKF and the maximum number of events was set to 100 in L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py:
--- a/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
+++ b/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
@@ -25,7 +25,7 @@ GEOMETRY = "D98"
# 'HYBRID_DISPLACED_NEWKF_KILL' displaced tracklet followed by DR emulation and 5 param fit sim
# 'HYBRID_DISPLACED_NEWKF_MERGE' displaced tracklet followed by DR simulation and 5 param fit sim
# (Or legacy algos 'TMTT' or 'TRACKLET').
-L1TRKALGO = 'HYBRID'
+L1TRKALGO = 'HYBRID_NEWKF'
WRITE_DATA = False
Finally, the emulation was run with:
cd L1Trigger/TrackFindingTracklet/test/
cmsRun L1TrackNtupleMaker_cfg.py

The wires files for the reduced configuration are currently stored in the cms-L1TK/cmssw repo (see the next section for instructions on generating these).
If interface/Settings.h and test/L1TrackNtupleMaker_cfg.py have already been modified as for the full configuration above, then only one additional change is required before running with cmsRun:
--- a/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
+++ b/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
@@ -25,7 +25,7 @@ GEOMETRY = "D98"
# 'HYBRID_DISPLACED_NEWKF_KILL' displaced tracklet followed by DR emulation and 5 param fit sim
# 'HYBRID_DISPLACED_NEWKF_MERGE' displaced tracklet followed by DR simulation and 5 param fit sim
# (Or legacy algos 'TMTT' or 'TRACKLET').
-L1TRKALGO = 'HYBRID_NEWKF'
+L1TRKALGO = 'HYBRID_REDUCED'
WRITE_DATA = False
After running cmsRun L1TrackNtupleMaker_cfg.py in the TrackFindingTracklet/test directory, copy the configuration files:
cd ../data/
cp wires_reduced.dat LUTs/wires.dat
cp memorymodules_reduced.dat LUTs/memorymodules.dat
cp processingmodules_reduced.dat LUTs/processingmodules.dat

By default, the emulation currently uses the D98 detector geometry. However, if the recommended D110 geometry is desired, only one line needs to change in test/L1TrackNtupleMaker_cfg.py:
--- a/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
+++ b/L1Trigger/TrackFindingTracklet/test/L1TrackNtupleMaker_cfg.py
@@ -15,8 +15,8 @@ process = cms.Process("L1TrackNtuple")
############################################################
# D110 recommended (but D98 still works)
GEOMETRY = "D98"
-#GEOMETRY = "D110"
+GEOMETRY = "D110"
# Set L1 tracking algorithm:
# 'HYBRID' (baseline, 4par fit) or 'HYBRID_DISPLACED' (extended, 5par fit).

This geometry can be used to generate test vectors for either the full or reduced configuration.
The steps needed to generate the configurations files for the reduced combined module chains are explained below.
Step 1: Produce the reduced configuration
Copy the wires.dat, memorymodules.dat, and processingmodules.dat files from the full configuration to the area where you are running makeReducedConfig.py. In what follows, these files are assumed to be named cm_wires.dat, cm_processingmodules.dat, and cm_memorymodules.dat.
We have three different reduced combined module configurations:
- The "CM_Reduced" configuration, which is a skinny chain with one TP. To generate the configuration files for this configuration, run:
./makeReducedConfig.py -w cm_wires.dat -p cm_processingmodules.dat -m cm_memorymodules.dat -s C -o CM_Reduced_ -t TP --no-graph
- The "CM_Reduced2" configuration, which implements all TPs in L1L2 and all barrel MPs in L3-L6. To generate this configuration, run:
./makeReducedConfig.py -w cm_wires.dat -p cm_processingmodules.dat -m cm_memorymodules.dat -s All -o CM_Reduced2_ -t TP --no-graph
- The "CM_Barrel" configuration, which has all barrel TPs and MPs. To generate the configuration files for this configuration, run the following commands (which remove all disk-related modules):
cat cm_memorymodules.dat | grep -v "D[1234]" > CM_Barrel_memorymodules.dat
cat cm_processingmodules.dat | grep -v "D[1234]" > CM_Barrel_processingmodules.dat
cat cm_wires.dat | grep -v "D[1234]" > CM_Barrel_wires.dat

This should produce the three files
- CM_{Reduced,Reduced2,Barrel}_wires.dat
- CM_{Reduced,Reduced2,Barrel}_memorymodules.dat
- CM_{Reduced,Reduced2,Barrel}_processingmodules.dat
respectively for the different configurations.
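The disk-removal filter used for CM_Barrel can also be expressed in Python. This sketch mirrors grep -v "D[1234]" on in-memory lines (the sample module names are illustrative, not taken from a real configuration file):

```python
import re

# Drop any configuration line that references a disk D1-D4,
# mirroring: grep -v "D[1234]"
def drop_disk_modules(lines):
    disk = re.compile(r"D[1234]")
    return [line for line in lines if not disk.search(line)]

sample = ["MP_L3PHIC input1", "PR_D1PHIA input2"]
print(drop_disk_modules(sample))  # only the barrel (L3) line survives
```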
Purpose: Automatically run SW quality checks and build the HLS projects (csim, csynth, cosim, and export) for each PR to the master branch.
In order to keep the GitHub repository public we use GitHub Actions and GitLab CI/CD:
- GitHub Actions uses a public runner, the workflow is defined in .github/workflows/github_CI.yml
- GitHub Actions mirrors the repository to GitLab and runs GitLab CI/CD
- GitLab CI/CD uses a private runner (lnxfarm327.colorado.edu) and performs the SW quality checks and the HLS builds as defined in .gitlab-ci.yml
- SW quality checks are based on clang-tidy (llvm-toolset-7.0) and are defined in .clang-tidy and .clang-format, very similarly to CMSSW
- HLS builds use Vivado HLS (or Vitis HLS) and are defined in the script files of the project folder
- Results (logs and artifacts) of the SW quality checks and HLS builds can be found here https://gitlab.cern.ch/cms-l1tk/firmware-hls/pipelines
- The default behavior blocks a stage (e.g. Hls-build) when a previous stage (e.g. Quality-check) failed
- GitHub Actions pulls the GitLab CI/CD status and the pass/fail outcome
- Add your branch name to the "on:" section of .github/workflows/GitLab_CI.yml
- In the "push:" subsection to trigger CI on each push, e.g. "branches: [feat_CI,<your_branch_name>]" and/or
- in the "pull_request:" subsection to trigger CI on each PR, e.g. "branches: [master,<your_branch_name>]"
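For illustration, the resulting "on:" section of the workflow file might look like the fragment below (my_feature_branch is a placeholder for your branch name):

```yaml
# .github/workflows/GitLab_CI.yml (illustrative fragment)
on:
  push:
    branches: [feat_CI, my_feature_branch]
  pull_request:
    branches: [master, my_feature_branch]
```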
This section details how the track finder firmware can be integrated into the EMP framework and deployed on an Apollo board.
In order to reduce the number of redundant locations for EMP build instructions (and thus to reduce out-of-date information), please find the most recent EMP build instructions on the apollo site.
Currently, the EMP build is created in multiple steps. First, one runs the fpga1, fpga2, or all rule in this makefile, which in turn runs the makefiles in the CombinedConfig_FPGA1 or CombinedConfig_FPGA2 integration tests to download the necessary LUTs/MemPrints, generate the pattern recognition VHDL wrapper, and compile the pattern recognition HLS modules. This is currently done with Vivado 2020 or earlier, since the pattern recognition HLS is incompatible with Vitis, the successor to Vivado HLS.
Next, the project is generated using the ipbb tool. One points ipbb to one of the configuration files in this directory, which ipbb uses to generate the project. This includes adding relevant source and constraint files, running any setup tcl scripts, and recursively calling other configuration files in other dependencies such as emp-fwk and the KalmanFilter fit, which is currently included as a submodule in this repository.