Important
This document covers the nTuple production workflow.
The analysis workflow is documented in README_Analysis.md.
Warning
This framework has only been tested on Fermilab LPC machines.
It might need some adjustments for lxplus.
source /cvmfs/cms.cern.ch/cmsset_default.sh
cmsrel CMSSW_15_0_6
cd CMSSW_15_0_6/src
cmsenv
## Clone COMBINE tool
git -c advice.detachedHead=false clone --depth 1 --branch v10.5.1 https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
## Clone Combine Harvester
git clone https://github.com/cms-analysis/CombineHarvester.git
cd CombineHarvester
git checkout v3.0.0-pre1
cd ..
## Dijet Framework
git clone https://github.com/asimsek/DijetScoutingRun3Analyzer.git
## CMSSW_15_X: fix for CombineHarvester BuildFiles
sed -i 's/name="python"/name="python3"/g' \
CombineHarvester/CombinePdfs/bin/BuildFile.xml \
CombineHarvester/CombineTools/bin/BuildFile.xml
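The sed call above only renames the `python` tool to `python3` inside the two BuildFiles; its effect can be previewed on a throwaway line:

```shell
# Demo of the substitution on a sample BuildFile line (not the real file)
line='<use name="python"/>'
echo "$line" | sed 's/name="python"/name="python3"/g'
# prints: <use name="python3"/>
```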
## Build once from CMSSW src
scram b clean; scram b -j"$(nproc --ignore=2)"

From your new COMBINE area:
cd $CMSSW_BASE/src/HiggsAnalysis/CombinedLimit
# Add an extra repo for the necessary files (it will be removed at the end)
git remote add cmsdijetfunctions https://github.com/asimsek/CMSDijetFunctions.git 2>/dev/null || true
# Validate:
git remote -v
# Fetch the branch (shallow)
git fetch --depth 1 cmsdijetfunctions main

Now define the explicit list of files you want to copy, and check them out:
# Create a minimal list of the files you need:
FILES=(
'src/RooDijet*.cc'
'src/RooModExp*.cc'
'src/RooAtlas*.cc'
'src/RooModDijet*.cc'
'interface/RooDijet*.h'
'interface/RooModExp*.h'
'interface/RooAtlas*.h'
'interface/RooModDijet*.h'
)
git restore --source=cmsdijetfunctions/main --worktree -- "${FILES[@]}"

# Remove the temporary remote:
git remote remove cmsdijetfunctions 2>/dev/null || true
# Validate:
git remote -v

Add the following to the src/classes_def.xml file (just before the </lcgdict> line):
<class name="RooDijet3ParamBinPdf" />
<class name="RooDijetBinPdf" />
<class name="RooDijet5ParamBinPdf" />
<class name="RooDijet6ParamBinPdf" />
<class name="RooDijet5ParamPolyExtBinPdf" />
<class name="RooDijet6ParamPolyExtBinPdf" />
<class name="RooDijet7ParamPolyExtBinPdf" />
<class name="RooModExp3ParamBinPdf" />
<class name="RooModExp4ParamBinPdf" />
<class name="RooModExpBinPdf" />
<class name="RooModExp6ParamBinPdf" />
<class name="RooAtlas3ParamBinPdf" />
<class name="RooAtlas4ParamBinPdf" />
<class name="RooAtlasBinPdf" />
<class name="RooAtlas6ParamBinPdf" />
<class name="RooModDijet3ParamBinPdf" />
<class name="RooModDijet4ParamBinPdf" />
<class name="RooModDijet5ParamBinPdf" />
<class name="RooModDijet6ParamBinPdf" />Add the following to the src/classes.h file (end of the file is fine):
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet3ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijetBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet5ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet6ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet5ParamPolyExtBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet6ParamPolyExtBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooDijet7ParamPolyExtBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModExp3ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModExp4ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModExpBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModExp6ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooAtlas3ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooAtlas4ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooAtlasBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooAtlas6ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModDijet3ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModDijet4ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModDijet5ParamBinPdf.h"
#include "HiggsAnalysis/CombinedLimit/interface/RooModDijet6ParamBinPdf.h"cd $CMSSW_BASE/src/HiggsAnalysis/CombinedLimit
scramv1 b clean; scramv1 b -j$(nproc --ignore=2)

Important
Do NOT forget to validate/define the JEC paths in data/cfg/data_jec_list.txt for data and data/cfg/mc_jec_list.txt for MC.
MC should be defined as a plain year (e.g., 2024, 2025, etc.); however, data should be defined era by era, since each era usually has a different JEC configuration.
Important
GoldenJSON files are defined in the input cfg file (e.g., inputFiles_PFScouting_NanoAOD/PFScouting_2024H_cfg.txt).
Validate/pull the latest GoldenJSON file for your dataset from the official repo: CAF-Certification
Collisions24 is for 2024 data and Collisions25 is for 2025 data.
Please also note that the GoldenJSON files are updated regularly, and the DQM group can also update these files outside the data-taking year! Verify that you are using the latest GoldenJSON file before any nTuple production activities.
Caution
Run the following commands inside the main directory:
cd $CMSSW_BASE/src/DijetScoutingRun3Analyzer
## Create a temporary list:
cat > /tmp/nano_test_2024H.txt << 'EOF'
root://cms-xrd-global.cern.ch//store/data/Run2024H/ScoutingPFRun3/NANOAOD/ScoutNano-v1/2520000/03e073d9-ab61-450d-aa10-fa050e94a16b.root
EOF
## The framework uses the `analysisClass.C` file to process data.
## Therefore, create a symbolic link (`symlink`/`soft link`) to use any of your "analysisClass" files as `analysisClass.C`:
ln -sf analysisClass_mainDijetPFScoutingSelection_Run3_NanoAOD_Recluster.C src/analysisClass.C
## Generate the ROOT reader class (`rootNtupleClass.*`) so branch declarations and addresses match the NanoAOD file/tree you are about to process:
yes | ./scripts/make_rootNtupleClass.sh -f root://cms-xrd-global.cern.ch//store/data/Run2024H/ScoutingPFRun3/NANOAOD/ScoutNano-v1/2520000/03e073d9-ab61-450d-aa10-fa050e94a16b.root -t Events
## Compile your analyzer:
make clean && make -j8
## Start nTuple production:
./main /tmp/nano_test_2024H.txt config/cutFile_mainDijetPFScoutingSelection_Run3.txt Events test_NanoAOD_2024H_n0 test_NanoAOD_2024H_n0
## Optional: Validate the output:
root -l -q -e 'TFile f("test_NanoAOD_2024H_n0_reduced_skim.root"); TTree* t=(TTree*)f.Get("rootTupleTree/tree"); if(t) t->Print(); else f.ls();'

## Create a temporary list:
cat > /tmp/nano_test_monitoring_2024H.txt << 'EOF'
root://cms-xrd-global.cern.ch//store/data/Run2024I/ScoutingPFMonitor/NANOAOD/PromptReco-v1/000/386/508/00000/8d57403b-0253-4f30-9624-494d9c843bb5.root
EOF
## The framework uses the `analysisClass.C` file to process data.
## Therefore, create a symbolic link (`symlink`/`soft link`) to use any of your "analysisClass" files as `analysisClass.C`:
ln -sf analysisClass_mainDijetPFScoutingSelection_Run3_NanoAOD_Recluster.C src/analysisClass.C
## Generate the ROOT reader class (`rootNtupleClass.*`) so branch declarations and addresses match the NanoAOD file/tree you are about to process:
yes | ./scripts/make_rootNtupleClass.sh -f root://cms-xrd-global.cern.ch//store/data/Run2024I/ScoutingPFMonitor/NANOAOD/PromptReco-v1/000/386/508/00000/8d57403b-0253-4f30-9624-494d9c843bb5.root -t Events
## Compile your analyzer:
make clean && make -j8
## Start nTuple production:
./main /tmp/nano_test_monitoring_2024H.txt config/cutFile_mainDijetPFScoutingSelection_Run3.txt Events test_monitoring_NanoAOD_2024H_n0 test_monitoring_NanoAOD_2024H_n0
## Optional: Validate the output:
root -l -q -e 'TFile f("test_monitoring_NanoAOD_2024H_n0_reduced_skim.root"); TTree* t=(TTree*)f.Get("rootTupleTree/tree"); if(t) t->Print(); else f.ls();'

## Create a temporary list:
cat > /tmp/nano_test_QCD.txt << 'EOF'
root://cms-xrd-global.cern.ch//store/mc/RunIII2024Summer24NanoAOD/QCD_Bin-PT-1000to1500_TuneCP5_13p6TeV_pythia8/NANOAODSIM/140X_mcRun3_2024_realistic_v26-v2/120000/00ba6769-d1a4-4faa-be8f-2157b2a039d7.root
EOF
## The framework uses the `analysisClass.C` file to process data.
## Therefore, create a symbolic link (`symlink`/`soft link`) to use any of your "analysisClass" files as `analysisClass.C`:
ln -sf analysisClass_mainDijetPFScoutingSelection_Run3_NanoAOD_Recluster.C src/analysisClass.C
## Generate the ROOT reader class (`rootNtupleClass.*`) so branch declarations and addresses match the NanoAOD file/tree you are about to process.
## IMPORTANT: Don't forget to update the ROOT file path. You may need to use `root://cms-xrd-global.cern.ch/` or `root://cmsxrootd.fnal.gov/` to access dataset root files.
yes | ./scripts/make_rootNtupleClass.sh -f root://cms-xrd-global.cern.ch//store/mc/RunIII2024Summer24NanoAOD/QCD_Bin-PT-1000to1500_TuneCP5_13p6TeV_pythia8/NANOAODSIM/140X_mcRun3_2024_realistic_v26-v2/120000/00ba6769-d1a4-4faa-be8f-2157b2a039d7.root -t Events
## Compile your analyzer:
make clean && make -j8
## Start nTuple production:
./main /tmp/nano_test_QCD.txt config/cutFile_mainDijetPFScoutingSelection_Run3.txt Events test_NanoAOD_QCD_n0 test_NanoAOD_QCD_n0
## Optional: Validate the output:
ls -lh test_NanoAOD_QCD_n0_reduced_skim.root
root -l -q -e 'TFile f("test_NanoAOD_QCD_n0_reduced_skim.root"); TTree* t=(TTree*)f.Get("rootTupleTree/tree"); if(t) t->Print(); else f.ls();'

Note
HTCondor is used to parallelize nTuple production by splitting large datasets into many independent jobs, which significantly reduces total processing time and improves large-scale production throughput.
Caution
Run the HTCondor scripts/commands inside the dijetCondor directory:
cd $CMSSW_BASE/src/DijetScoutingRun3Analyzer/dijetCondor
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024C_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024D_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024E_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024F_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024G_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024H_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024I_cfg.txt --force-new-list --request-memory-mb 4096
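The seven 2024 submissions above can also be driven by a single loop; a sketch with echo as a dry run (remove the `echo` to actually submit):

```shell
# Dry run: print each 2024 submission command (drop `echo` to submit for real)
for era in C D E F G H I; do
  echo python3 condor_submit_nanoAOD.py \
    -c "inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024${era}_cfg.txt" \
    --force-new-list --request-memory-mb 4096
done
```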
## 2025C
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting_2025C_v1_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting_2025C_v2_cfg.txt --force-new-list --request-memory-mb 4096
## 2025D
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting_2025D_v1_cfg.txt --force-new-list --request-memory-mb 4096
## 2025E
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting0_2025E_v1_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting1_2025E_v1_cfg.txt --force-new-list --request-memory-mb 4096
## 2025F
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting0_2025F_v1_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting0_2025F_v2_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting1_2025F_v1_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting1_2025F_v2_cfg.txt --force-new-list --request-memory-mb 4096
## 2025G
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting0_2025G_v1_cfg.txt --force-new-list --request-memory-mb 4096
python3 condor_submit_nanoAOD.py -c inputFiles_PFScouting_NanoAOD/2025/PFScouting1_2025G_v1_cfg.txt --force-new-list --request-memory-mb 4096

Tip
--no-submit: dry run (create the job files without submitting them to Condor).
--cms-connect: Additional HTCondor arguments for CMS Connect machines.
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024C_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024D_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024E_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024F_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024G_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024H_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024I_cfg.txt --force-new-list
## 2025C
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025C_v1_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025C_v2_cfg.txt --force-new-list
## 2025D
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025D_v1_cfg.txt --force-new-list
## 2025E
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025E_v1_cfg.txt --force-new-list
## 2025F
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025F_v1_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025F_v2_cfg.txt --force-new-list
## 2025G
python3 condor_submit_nanoAOD.py -c inputFiles_PFMonitoring_NanoAOD/2025/PFMonitoring_2025G_v1_cfg.txt --force-new-list

Tip
--no-submit: dry run (create the job files without submitting them to Condor).
--cms-connect: Additional HTCondor arguments for CMS Connect machines.
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT50to80_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT80to120_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT120to170_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT170to300_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT300to470_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT470to600_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT600to800_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT800to1000_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT1000to1500_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT1500to2000_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT2000to2500_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT2500to3000_cfg.txt --force-new-list
python3 condor_submit_nanoAOD.py -c inputFiles_QCD_NanoAOD/QCDMC_2024_PT3000toInf_cfg.txt --force-new-list

Warning
A dummy (or real) GoldenJSON file must be defined in the MC config files to avoid possible issues.
The given GoldenJSON won't actually be used for MC samples.
cd $CMSSW_BASE/src/DijetScoutingRun3Analyzer/dijetCondor
# Build once
c++ -O3 -march=native -DNDEBUG -o check_condor_outputs check_condor_outputs.cpp $(root-config --cflags --libs)

# Data (Scouting)
./check_condor_outputs cjobs_ScoutingPFRun3_Run2024G_ScoutNano_v1_NANOAOD_05March2026_13 /eos/uscms/store/group/lpcjj/Run3PFScouting/nanoAODnTuples/2024/ScoutingPFRun3/ScoutingPFRun3_Run2024G_ScoutNano_v1 --config inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024G_cfg.txt
# Data (Monitoring)
./check_condor_outputs cjobs_ScoutingPFMonitor_Run2024H_PromptReco_v1_NANOAOD_03March2026_02 /eos/uscms/store/group/lpcjj/Run3PFScouting/nanoAODnTuples/2024/ScoutingPFMonitor/ScoutingPFMonitor_Run2024H_PromptReco_v1 --config inputFiles_PFMonitoring_NanoAOD/2024/PFMonitoring_2024H_cfg.txt
# QCD MC
./check_condor_outputs cjobs_QCD_Bin-PT-80to120_TuneCP5_13p6TeV_pythia8_NANOAODSIM_03March2026_04 /eos/uscms/store/group/lpcjj/Run3PFScouting/nanoAODnTuples/2024/QCD_Bin-PT-80to120_TuneCP5_13p6TeV_pythia8 --config inputFiles_QCD_NanoAOD/QCDMC_2024_PT80to120_cfg.txt --check-subdirs

Tip
Use --check-subdirs to include sub-folders of the given directory in the ROOT file search.
Use --job-start and --job-end to restrict the check to a given job range.
Use --noPerFile to skip per-job matching and do only the total event comparison.
Use --threads N to override the C++ worker count; if omitted, hardware concurrency is used.
Use --resubmit --request-memory-mb 4096 to resubmit jobs for the missing files and request more memory for them.
Only the target EOS folder can be given (without the cjobs_ folder) to count the event number and compare it with the dataset; even a single ROOT file can be given for this purpose.
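Combining the flags above, a re-check of a job range with resubmission for the 2024G sample from the earlier example might look like this (the 0-99 range is illustrative; `echo` keeps it a dry run):

```shell
# Dry run (remove `echo` to execute); job range 0-99 is just an example
echo ./check_condor_outputs \
  cjobs_ScoutingPFRun3_Run2024G_ScoutNano_v1_NANOAOD_05March2026_13 \
  /eos/uscms/store/group/lpcjj/Run3PFScouting/nanoAODnTuples/2024/ScoutingPFRun3/ScoutingPFRun3_Run2024G_ScoutNano_v1 \
  --config inputFiles_PFScouting_NanoAOD/2024/PFScouting_2024G_cfg.txt \
  --job-start 0 --job-end 99 --resubmit --request-memory-mb 4096
```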
Warning
The following brilcalc setup instructions are prepared for Fermilab LPC.
Please follow the official brilcalc instructions for lxplus.
### Find which brilconda python exists on your node (pick one that exists)
ls -d /cvmfs/cms-bril.cern.ch/brilconda* 2>/dev/null
### Prefer a python3-based one if present (example: brilconda310)
BRILPY=/cvmfs/cms-bril.cern.ch/brilconda310/bin/python3
### Install brilcalc
$BRILPY -m pip install --user --upgrade brilws
export PATH=$HOME/.local/bin:$PATH
## Validate
which brilcalc
brilcalc --version

python3 condor_report.py -c inputFiles_PFScouting_NanoAOD/PFScouting_2024H_cfg.txt -o processed_lumis_2024H.json \
--brilcalc --unit /fb --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json \
--no-golden \
--brilcalc-extra -c "$CMS_PATH/SITECONF/local/JobConfig/site-local-config.xml"
python3 condor_report.py -c inputFiles_PFScouting_NanoAOD/PFScouting_2024H_cfg.txt -o processed_lumis_2024H.json \
--brilcalc --unit /fb --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json \
--golden-json \
--brilcalc-extra -c "$CMS_PATH/SITECONF/local/JobConfig/site-local-config.xml"

Tip
--brilcalc argument can be used to perform brilcalc on the newly generated json file.
--input-json <fileName>.json argument can be used to process an existing json file with brilcalc.
--golden-json can be used to filter the output json according to the GoldenJSON file (the opposite is --no-golden).
brilcalc lumi -c "$CMS_PATH/SITECONF/local/JobConfig/site-local-config.xml" -u /fb \
-i processed_lumis_2024H.json \
--normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json

Note
This script allows you to merge (hadd) the sample outputs.
The result is one ROOT file per sample type (e.g., one output for 80to120, one for 120to170, etc.).
This is a necessary step for MC scaling with xSec values.
python3 merge_samples.py /eos/uscms/store/group/lpcjj/Run3PFScouting/nanoAODnTuples/2024/QCDSamples --sample-glob "QCD_Bin-PT-*"

Tip
--sample-glob argument lets you select which sample folders under the base directory will be merged.
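To see which folders a given glob would select, you can preview the pattern with plain shell before running the merge (throwaway directories here; merge_samples.py applies the same pattern to its base directory internally):

```shell
# Preview which sample folders match the glob (demo on throwaway dirs)
base=$(mktemp -d)
mkdir -p "$base/QCD_Bin-PT-80to120" "$base/QCD_Bin-PT-120to170" "$base/ScoutingPFRun3_2024H"
ls -d "$base"/QCD_Bin-PT-*    # only the two QCD folders match
rm -rf "$base"
```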
Note
If you have some jobs in the "Hold/Held" state, use the following commands CAREFULLY to understand the problem!
Then either release the job(s) for re-processing, or remove them completely!
## Learn the hold reason of the jobs:
condor_q -hold -af HoldReason
## Release the job for re-processing:
condor_release -const 'jobstatus==5' -name lpcschedd6
## WARNING!!
## This removes all the jobs with the same `.sh` file name PERMANENTLY!!:
## ONLY USE THIS IF YOU NEED TO REMOVE ALL THE SAME TYPE OF JOBS!!
## Don't forget to change the lpcschedd number and also the file name prefix (until & including `_n`).
condor_rm -name lpcschedd5 -constraint "Owner==\"$USER\" && regexp(\".*QCD_Bin-PT-80to120_TuneCP5_13p6TeV_pythia8_NANOAODSIM_n[0-9]+\\.sh$\", Cmd)"
## Shows a detailed match diagnosis for the job, explaining why it is not running
## e.g., requirement mismatch, memory, site constraints.
condor_q -better-analyze <jobID>

Caution
Run this command inside the dijetCondor directory:
cd $CMSSW_BASE/src/DijetScoutingRun3Analyzer/dijetCondor
# Data (Scouting)
python3 check_dataset_entries.py /ScoutingPFRun3/Run2024H-ScoutNano-v1/NANOAOD --workers 8 --backend root
# Data (Monitoring)
python3 check_dataset_entries.py /ScoutingPFMonitor/Run2024H-PromptReco-v1/NANOAOD --workers 8 --backend root
# QCD MC
python3 check_dataset_entries.py /QCD_Bin-PT-80to120_TuneCP5_13p6TeV_pythia8/RunIII2024Summer24NanoAOD-140X_mcRun3_2024_realistic_v26-v2/NANOAODSIM --workers 8 --backend root

Tip
This script opens all ROOT files in the given DAS dataset, counts the total number of entries in the Events tree, and compares the summed value with DAS nevents.
--backend root is recommended on LPC/CMSSW setups.
--workers controls the parallel file checks, and --tree can be used if the input tree name is different from Events.
Important
You need a Fermilab (LPC) computing account in order to request and use this system.
If you're working on lxplus, skip this part!
LPC Condor resources are limited, and jobs might take a very long time due to user priority and high demand. CMS Connect provides a Tier-3-like environment for Condor jobs and allows you to use all resources available in the CMS Global Pool.
Register to ci-connect
Caution
ci-connect is the new platform; cilogon is deprecated.
Follow the Twiki page instructions for the SSH key steps, but use the ci-connect page instead of cilogon.
Allow the system a few hours after uploading the SSH key to your ci-connect profile.
Warning
The following steps can only be used once the system recognizes your new ssh-key.
ssh <username>@login.uscms.org

Note
You might need to delete the old certificates in the ~/.globus folder (if they exist)!
copy_certificates

voms-proxy-init -voms cms

Warning
Important: Set a default project for your account with the following commands to be able to submit any Condor jobs:
mkdir ~/.ciconnect
echo "cms.org.cms" > ~/.ciconnect/defaultproject