Full Configuration Example
Below is a complete example configuration with annotations for some sections:
#The "DIRAC" section contains general parameters needed in most installation types.
DIRAC
{
#The name of the Virtual Organization of the installation User Community.
#This option is defined in single-VO installations.
#VirtualOrganization = myVO
#The name of the DIRAC installation Setup. This option is set in client
#installations to select which subset of DIRAC Systems the client will work with.
Setup = mySetup
#The list of extensions to the Core DIRAC software used by the given installation
#Extensions = WebApp
#The Configuration subsection defines several options to discover and use the configuration data
Configuration
{
#This option defines a list of configuration servers, both master and slaves,
#from which clients can obtain the configuration data.
Servers = https://server:8443/Configuration/Server
#Servers +=
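#Additional slave servers can be appended with "+=", for example (hypothetical second server):
#Servers += https://server2:8443/Configuration/Server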
#The URL of the Master Configuration Server.
#This server is used for updating the Configuration Service
MasterServer = https://server:8443/Configuration/Server
#Enables automatic merging of the modifications done in parallel by several clients.
#Takes a boolean value. By default false.
#EnableAutoMerge = false
#This subsection is used to configure the Configuration Server attributes.
#It should not be edited by hand since it is updated by the Master Configuration Server
#to reflect the current situation of the system. Takes a boolean value. By default true.
#AutoPublish = true
#Name of Configuration file
Name = Dirac-Prod
}
#Set propagation time, by default 300 seconds.
#PropagationTime = 300
#How often the secondary servers refresh the configuration from the master.
#Expressed as an integer, in seconds. By default 300.
#RefreshTime = 300
#Set the slave servers' grace time, in seconds. By default 600.
#SlavesGraceTime = 600
#CS configuration version used by DIRAC services as an indicator of when they need to reload
#the configuration. Expressed using date format. By default 0.
#Version = 2011-02-22 15:17:41.811223
#This subsection defines several options related to the DIRAC security framework.
#WARNING: This section should only appear in the local dirac.cfg file of each installation,
#never in the central configuration.
Security
{
#Flag to use server certificates and not user proxies.
#This is typically true for the server installations. By default false.
#UseServerCertificate = true
#Flag for the client to skip verifying the server identity.
#The flag is usually defined in the client installations. By default false.
#SkipCAChecks = false
#Path where the host certificate is located on the server.
#If not specified, DIRAC will try to find it.
#CertFile = /opt/dirac/etc/grid-security/hostcert.pem
#Path where the host key is located on the server.
#If not specified, DIRAC will try to find it.
#KeyFile = /opt/dirac/etc/grid-security/hostkey.pem
#Flag to use access tokens and not user proxies. This is typically false.
#UseTokens = true
#Section that describes OAuth 2.0 authorization settings and metadata;
#it is required to interact with the DIRAC Authorization Server.
#Please see https://datatracker.ietf.org/doc/html/rfc8414 for more details.
Authorization
{
#The authorization server's issuer identifier,
#which is a URL that uses the "https" scheme and has no query or fragment components.
#Please see https://datatracker.ietf.org/doc/html/rfc8414#section-2.
#This option must be defined in the client installations and on the DIRAC Authorization Server host.
#issuer = https://server/auth
#Section that describes the DIRAC Authorization Server OAuth 2.0 clients metadata.
Clients
{
#Subsection name is a client name. Options are the client metadata,
#please see https://datatracker.ietf.org/doc/html/rfc7591#section-2.
#MyApp
#{
#client_id = MY_CLIENT_ID
#client_secret = MY_CLIENT_SECRET
#scope = supported scopes separated by a space
#response_types = device,
#grant_types = refresh_token,
#}
}
}
}
#The subsection defines the names of different DIRAC Setups.
Setups
{
#For each Setup known to the installation, there must be a subsection with the appropriate name.
#In each subsection of the Setup section the names of corresponding system instances are defined.
#In the example below "Production" instances of the Configuration
#and Framework systems are defined as part of the "Dirac-Production" setup.
Dirac-Production
{
#Each option represents a DIRAC System available in the Setup
#and the Value is the instance of System that is used in that setup.
#For instance, since the Configuration is unique for the whole installation,
#all setups should have the same instance of the Configuration system.
Configuration = Production
Framework = Production
}
}
}
#This part contains anything related to DiracX
DiracX
{
#The URL of the DiracX server
URL = https://diracx.invalid:8000
#A key used for privileged interactions with DiracX
LegacyExchangeApiKey = diracx:legacy:InsecureChangeMe
#List of VOs which should not use DiracX via the legacy compatibility mechanism
DisabledVOs = dteam
DisabledVOs += cta
}
#Registry section:
#Sections to register VOs, groups, users and hosts
#https://dirac.readthedocs.org/en/latest/AdministratorGuide/UserManagement.html
Registry
{
#Registry options:
#Default user group to be used:
DefaultGroup = lhcb_user
#The quarantine user group is typically used when you want to place
#users in a group by hand as a "punishment" for a certain period of time:
QuarantineGroup = lowPriority_user
#Default proxy time expressed in seconds:
DefaultProxyTime = 4000
#Trusted hosts section; each subsection represents the host name of a DIRAC secondary server
Hosts
{
dirac.host.com
{
#Host distinguished name obtained from the host certificate
DN = /O=MyOrg/OU=Unity/CN=dirac.host.com
#Properties associated with the host
Properties = JobAdministrator
Properties += FullDelegation
Properties += Operator
Properties += CSAdministrator
Properties += ProductionManagement
Properties += AlarmsManagement
Properties += ProxyManagement
Properties += TrustedHost
}
}
#VOs:
#DIRAC VOs section; each subsection represents the name of a DIRAC VO or the alias of a real VOMS VO
VO
{
#It is not mandatory for the DIRAC VO to have the same name as the corresponding VOMS VO
lhcb
{
#VO administrator user name, who MUST also be registered (in the /Registry/Users section)
VOAdmin = lhcbadmin
#VO administrator group used for querying VOMS server.
#If not specified, the VO "DefaultGroup" will be used
VOAdminGroup = lhcb_admin
#Real VOMS VO name, if this VO is associated with a VOMS VO
VOMSName = lhcb
#Registered identity provider associated with VO
IdProvider = CheckIn
#Section to describe all the VOMS servers that can be used with the given VOMS VO
VOMSServers
{
#The host name of the VOMS server
cclcgvomsli01.in2p3.fr
{
#DN of the VOMS server certificate
DN = /O=GRID-FR/C=FR/O=CNRS/OU=CC-IN2P3/CN=cclcgvomsli01.in2p3.fr
#The VOMS server port
Port = 15003
#CA that issued the VOMS server certificate
CA = /C=FR/O=CNRS/CN=GRID2-FR
}
}
}
}
#Groups:
#DIRAC groups section; each subsection represents a group name
Groups
{
#Group for the common user
lhcb_user
{
#DIRAC user logins that belong to the group
Users = lhcbuser1
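#Further users can be appended to the group with "+=", for example (hypothetical user name):
#Users += lhcbuser2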
#Group properties (set the permissions of the group users)
Properties = NormalUser # Normal user operations
#Permission to download a proxy with this group; by default True
DownloadableProxy = False
#Role of the users in the VO
VOMSRole = /lhcb
#Scope associated with a role of the user in the VO
IdPRole = some_special_scope
#Virtual organization associated with the group
VOMSVO = lhcb
#Just for normal users:
JobShare = 200
#Controls automatic Proxy upload:
AutoUploadProxy = True
#Controls automatic Proxy upload for Pilot groups:
AutoUploadPilotProxy = True
#Controls automatic addition of VOMS extension:
AutoAddVOMS = True
}
#Group to submit pilot jobs
lhcb_pilot
{
Properties = GenericPilot # Generic pilot
Properties += LimitedDelegation # Allow getting only limited proxies (ie. pilots)
Properties += Pilot # Private pilot
}
#Admin group
lhcb_admin
{
Properties = ServiceAdministrator # DIRAC Service Administrator
Properties += CSAdministrator # possibility to edit the Configuration Service
Properties += JobAdministrator # Job Administrator can manipulate everybody's jobs
Properties += FullDelegation # Allow getting full delegated proxies
Properties += ProxyManagement # Allow managing proxies
Properties += Operator # Operator
}
}
#Users:
#DIRAC users section; each subsection represents a user name
Users
{
lhcbuser1
{
#Distinguished name obtained from the user certificate (Mandatory)
DN = /O=My organisation/C=FR/OU=Unit/CN=My Name
#User e-mail (Mandatory)
Email = my@email.com
#Cellular phone number
mobile = +030621555555
#Quota assigned to the user. Expressed in MBs.
Quota = 300
#This subsection describes the properties associated with each DN attribute (optional)
DNProperties
{
#Arbitrary section name
DNSubsection
{
#Distinguished name obtained from the user certificate (Mandatory)
DN = /O=My organisation/C=FR/OU=Unit/CN=My Name
#Proxy provider that can generate a proxy certificate for the DN given in the DN attribute.
ProxyProviders = MY_DIRACCA
}
}
}
}
}
Systems
{
DataManagementSystem
{
Agents
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts3.html#fts3agent
FTS3Agent
{
OperationBulkSize = 20 # How many Operations to treat in one loop
JobBulkSize = 20 # How many Jobs to monitor in one loop
MaxFilesPerJob = 100 # Max number of files to go in a single job
MaxAttemptsPerFile = 256 # Max number of attempts per file
DeleteGraceDays = 180 # days before removing jobs
DeleteLimitPerCycle = 100 # Max number of deletes per cycle
KickAssignedHours = 1 # hours before kicking jobs with old assignment tag
KickLimitPerCycle = 100 # Max number of kicks per cycle
}
}
Services
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/dfc.html#filecataloghandler
FileCatalogHandler
{
Port = 9197
DatasetManager = DatasetManager
DefaultUmask = 0775
DirectoryManager = DirectoryLevelTree
DirectoryMetadata = DirectoryMetadata
FileManager = FileManager
FileMetadata = FileMetadata
GlobalReadAccess = True
LFNPFNConvention = Strong
ResolvePFN = True
SecurityManager = NoSecurityManager
SEManager = SEManagerDB
UniqueGUID = False
UserGroupManager = UserAndGroupManagerDB
ValidFileStatus = [AprioriGoodTrashRemovingProbing]
ValidReplicaStatus = [AprioriGoodTrashRemovingProbing]
VisibleFileStatus = [AprioriGood]
VisibleReplicaStatus = [AprioriGood]
}
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts.html#ftsmanager
FTS3ManagerHandler
{
#No specific configuration
Port = 9193
}
}
Databases
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/dfc.html#filecatalogdb
FileCatalogDB
{
#No specific configuration
DBName = FileCatalogDB
}
FTS3DB
{
#No specific configuration
DBName = FTS3DB
}
}
}
Framework
{
Services
{
BundleDelivery
{
Protocol = https
Port = 9158
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
ComponentMonitoring
{
Port = 9190
#This enables ES monitoring only for this particular service.
EnableActivityMonitoring = no
Authorization
{
Default = ServiceAdministrator
componentExists = authenticated
getComponents = authenticated
hostExists = authenticated
getHosts = authenticated
installationExists = authenticated
getInstallations = authenticated
updateLog = Operator
}
}
Gateway
{
Port = 9159
}
SystemAdministrator
{
Port = 9162
Authorization
{
Default = ServiceAdministrator
storeHostInfo = Operator
}
}
#BEGIN TornadoTokenManager:
#Section to describe TokenManager system
TornadoTokenManager
{
Protocol = https
#Description of rules for access to methods
Authorization
{
#Settings by default:
Default = authenticated
getUsersTokensInfo = ProxyManagement
}
}
#END
#BEGIN ProxyManager:
#Section to describe ProxyManager system
#https://dirac.readthedocs.org/en/latest/AdministratorGuide/Systems/Framework/ProxyManager/index.html
ProxyManager
{
Port = 9152
MaxThreads = 100
#Email to use as a sender for the expiration reminder
MailFrom = "proxymanager@diracgrid.org"
#Description of rules for access to methods
Authorization
{
Default = authenticated
getProxy = FullDelegation
getProxy += LimitedDelegation
getProxy += PrivateLimitedDelegation
getVOMSProxy = FullDelegation
getVOMSProxy += LimitedDelegation
getVOMSProxy += PrivateLimitedDelegation
getLogContents = ProxyManagement
}
}
#END
#BEGIN TornadoProxyManager:
#Section to describe ProxyManager system
#https://dirac.readthedocs.org/en/latest/AdministratorGuide/Systems/Framework/ProxyManager/index.html
TornadoProxyManager
{
Protocol = https
#Email to use as a sender for the expiration reminder
MailFrom = "proxymanager@diracgrid.org"
#Description of rules for access to methods
Authorization
{
Default = authenticated
getProxy = FullDelegation
getProxy += LimitedDelegation
getProxy += PrivateLimitedDelegation
getVOMSProxy = FullDelegation
getVOMSProxy += LimitedDelegation
getVOMSProxy += PrivateLimitedDelegation
getLogContents = ProxyManagement
}
}
#END
SecurityLogging
{
Port = 9153
#Directory where log info is kept
DataLocation = data/securityLog
Authorization
{
Default = authenticated
}
}
UserProfileManager
{
Port = 9155
Authorization
{
Default = authenticated
}
}
#BEGIN TornadoUserProfileManager:
#Section to describe UserProfileManager service
TornadoUserProfileManager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
Notification
{
Port = 9154
Authorization
{
sendMail = authenticated
}
}
#BEGIN TornadoNotification:
#Section to describe Notification service
TornadoNotification
{
Protocol = https
Authorization
{
sendMail = authenticated
}
}
#BEGIN TornadoComponentMonitoring:
#Section to describe ComponentMonitoring service
TornadoComponentMonitoring
{
Protocol = https
Authorization
{
Default = ServiceAdministrator
componentExists = authenticated
getComponents = authenticated
hostExists = authenticated
getHosts = authenticated
installationExists = authenticated
getInstallations = authenticated
updateLog = Operator
}
}
}
APIs
{
#BEGIN Auth:
#Section to describe RESTful API for DIRAC Authorization Server(AS)
Auth
{
Port = 8000
#Allow downloading a personal proxy. By default True
allowProxyDownload = True
}
}
#END
Agents
{
#BEGIN ProxyRenewalAgent
ProxyRenewalAgent
{
PollingTime = 900
#Email to use as a sender for the expiration reminder
MailFrom = proxymanager@diracgrid.org
MinimumLifeTime = 3600
RenewedLifeTime = 54000
}
#END
#BEGIN ComponentSupervisionAgent
ComponentSupervisionAgent
{
#Time in seconds between start of cycles
PollingTime = 600
#Overall enable or disable
EnableFlag = False
#Email addresses receiving notifications
MailTo =
#Sender email address
MailFrom =
#If True automatically restart stuck agents
RestartAgents = False
#if True automatically restart stuck services
RestartServices = False
#if True automatically restart stuck executors
RestartExecutors = False
#if True automatically start or stop components based on host configuration
ControlComponents = False
#if True automatically add or remove service URLs
CommitURLs = False
#list of pattern in instances to disable restart for them
DoNotRestartInstancePattern = RequestExecutingAgent
}
}
}
RequestManagementSystem
{
Agents
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#cleanreqdbagent
CleanReqDBAgent
{
DeleteGraceDays = 60 # Delay after which Requests are removed
DeleteLimit = 100 # Maximum number of Requests to remove per cycle
DeleteFailed = False # Whether to also delete Failed Requests
KickGraceHours = 1 # After how long to kick the Requests in `Assigned`
KickLimit = 10000 # Maximum number of Requests kicked per cycle
}
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#requestexecutingagent
RequestExecutingAgent
{
BulkRequest = 0
MinProcess = 1
MaxProcess = 8
ProcessPoolQueueSize = 25
ProcessPoolTimeout = 900
ProcessTaskTimeout = 900
ProcessPoolSleep = 4
RequestsPerCycle = 50
#Define the different Operation types
#see http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsObjects.html#operation-types
OperationHandlers
{
DummyOperation
{
#These parameters can be defined for all handlers
#The location of the python file, without .py, is mandatory
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/DummyHandler # Mandatory
LogLevel = DEBUG # self explanatory
MaxAttempts = 256 # Maximum attempts per file
TimeOut = 300 # Timeout in seconds of the operation
TimeOutPerFile = 40 # Additional delay per file
}
ForwardDISET
{
Location = DIRAC/RequestManagementSystem/Agent/RequestOperations/ForwardDISET
}
MoveReplica
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/MoveReplica
}
PutAndRegister
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/PutAndRegister
}
RegisterFile
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RegisterFile
}
RegisterReplica
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RegisterReplica
}
RemoveFile
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RemoveFile
}
RemoveReplica
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RemoveReplica
}
ReplicateAndRegister
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/ReplicateAndRegister
FTSMode = True # If True, will use FTS to transfer files
FTSBannedGroups = lhcb_user # list of groups for which not to use FTS
}
SetFileStatus
{
Location = DIRAC/TransformationSystem/Agent/RequestOperations/SetFileStatus
}
}
}
}
Databases
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#requestdb
RequestDB
{
#No specific configuration
DBName = RequestDB
}
}
Services
{
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#reqmanager
ReqManager
{
Port = 9140
constantRequestDelay = 0 # Constant delay when retrying a request
}
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#reqproxy
ReqProxy
{
Port = 9161
}
}
URLs
{
#Yes.... it is ReqProxyURLs, and not ReqProxy...
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/RequestManagement/rmsComponents.html#reqproxy
ReqProxyURLs = dips://server1:9161/RequestManagement/ReqProxy
ReqProxyURLs += dips://server2:9161/RequestManagement/ReqProxy
}
}
TransformationSystem
{
Agents
{
#BEGIN TransformationCleaningAgent
TransformationCleaningAgent
{
#MetaData key to use to identify output data
TransfIDMeta = TransformationID
#Locations of the OutputData; if the OutputDirectories parameter is not set for the
#transformation, only 'MetadataCatalog' has to be used
DirectoryLocations = TransformationDB
DirectoryLocations += MetadataCatalog
#Enable or disable, default enabled
EnableFlag = True
#How many days to wait before archiving transformations
ArchiveAfter = 7
#Shifter to use for removal operations, default is empty and
#using the transformation owner for cleanup
shifterProxy =
#Which transformation types to clean
#If not filled, transformation types are taken from
#Operations/Transformations/DataManipulation
#and Operations/Transformations/DataProcessing
TransformationTypes =
#Time between cycles in seconds
PollingTime = 3600
}
}
}
#END
WorkloadManagementSystem
{
Databases
{
JobParametersDB
{
#Host of OpenSearch instance
Host = host.some.where
#index name (default is "job_parameters")
index_name = a_different_name
}
}
JobWrapper
{
#Minimum output buffer requested for running jobs
MinOutputDataBufferGB = 5
}
}
Accounting
{
Services
{
#BEGIN DataStore
DataStore
{
Port = 9133
#Run compaction; has to be True for the master, False for the others
RunBucketing = True
Authorization
{
Default = authenticated
compactDB = ServiceAdministrator
deleteType = ServiceAdministrator
registerType = ServiceAdministrator
setBucketsLength = ServiceAdministrator
regenerateBuckets = ServiceAdministrator
}
}
#END
#BEGIN ReportGenerator
ReportGenerator
{
Port = 9134
#folder relative to instance path, where data is stored
DataLocation = data/accountingGraphs
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
}
#END
Agents
{
#BEGIN NetworkAgent
NetworkAgent
{
MaxCycles = 0
PollingTime = 60
#URI of the MQ of the perfSONAR information
MessageQueueURI =
#How long (in seconds) objects are kept in the buffer if they cannot be written to the DB
BufferTimeout = 3600
}
}
}
Configuration
{
Services
{
#BEGIN Server
#This is the master CS, which is exposed via Tornado but at port 9135
Server
{
HandlerPath = DIRAC/ConfigurationSystem/Service/TornadoConfigurationHandler.py
Port = 9135
#Subsection to configure authorization over the service
Authorization
{
#Default authorization
Default = authenticated
#Define who can commit new configuration
commitNewData = CSAdministrator
#Define who can roll back the configuration to a previous version
rollbackToVersion = CSAdministrator
#Define who can get version contents
getVersionContents = ServiceAdministrator
getVersionContents += CSAdministrator
forceGlobalConfigurationUpdate = CSAdministrator
}
}
#END
#BEGIN TornadoServer
#This is the slave CS, exposed via standard Tornado
TornadoConfiguration
{
Protocol = https
#Subsection to configure authorization over the service
Authorization
{
#Default authorization
Default = authenticated
#Define who can commit new configuration
commitNewData = CSAdministrator
#Define who can roll back the configuration to a previous version
rollbackToVersion = CSAdministrator
#Define who can get version contents
getVersionContents = ServiceAdministrator
getVersionContents += CSAdministrator
forceGlobalConfigurationUpdate = CSAdministrator
}
}
}
#END
Agents
{
#BEGIN Bdii2CSAgent
Bdii2CSAgent
{
#Time between cycles in seconds
PollingTime = 14400
BannedCEs =
#Only treat these sites
SelectedSites =
#Process Computing Elements
ProcessCEs = yes
#Mail Notifications options
MailTo =
MailFrom =
VirtualOrganization =
#Flag to turn to False if you want this agent to write in the CS
DryRun = True
#Host to query, must include port
Host = cclcgtopbdii01.in2p3.fr:2170
#If True, add single core queues for each Multi Core Queue and set
#RequiredTag=MultiProcessor for those
InjectSingleCoreQueues = False
}
#END
#BEGIN VOMS2CSAgent
VOMS2CSAgent
{
#Time between cycles in seconds
PollingTime = 14400
MailFrom = noreply@dirac.system
#Whether users will be added automatically
AutoAddUsers = True
#Whether users will be modified automatically
AutoModifyUsers = True
#Users no longer registered in VOMS are automatically deleted from DIRAC
AutoDeleteUsers = True
#Whether the suspended status is lifted, if it has been lifted in VOMS
AutoLiftSuspendedStatus = True
#Detailed report on users per group sent to the VO administrator
DetailedReport = True
#Automatically create user home directory in the File Catalog
MakeHomeDirectory = False
#List of VO names
VO = Any
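#Instead of "Any", a list of VO names can be given, for example (hypothetical values):
#VO = lhcb
#VO += biomed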
#Flag to turn to False if you want this agent to write in the CS (more granularity within other options)
DryRun = True
#Name of the plugin to validate or expand user's info. See :py:mod:`DIRAC.ConfigurationSystem.Client.SyncPlugins.DummySyncPlugin`
SyncPluginName =
#If set to true, will query the VO IAM server for the list of users, and print
#a comparison with what is in VOMS
CompareWithIAM = False
#If set to true, will only query IAM and return the list of users from there
UseIAM = False
#If set to true, only users with a nickname attribute defined in IAM are created in DIRAC
ForceNickname = False
}
#END
#BEGIN GOCDB2CSAgent
GOCDB2CSAgent
{
#Time between cycles in seconds
PollingTime = 14400
#Flag to turn to False if you want this agent to write in the CS
DryRun = True
#if False, disable the updating of perfSONAR endpoints from GOCDB
UpdatePerfSONARS = True
}
#END
#BEGIN RucioSynchronizerAgent
RucioSynchronizerAgent
{
#Time between cycles in seconds
PollingTime = 120
}
}
}
DataManagement
{
Services
{
DataIntegrity
{
Port = 9150
Authorization
{
Default = authenticated
}
}
#BEGIN TornadoDataIntegrity
TornadoDataIntegrity
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
#BEGIN FTS3Manager
FTS3Manager
{
Port = 9193
Authorization
{
Default = authenticated
}
}
#END
#BEGIN TornadoFTS3Manager
TornadoFTS3Manager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
FileCatalog
{
Port = 9197
UserGroupManager = UserAndGroupManagerDB
SEManager = SEManagerDB
SecurityManager = NoSecurityManager
DirectoryManager = DirectoryLevelTree
FileManager = FileManager
UniqueGUID = False
GlobalReadAccess = True
LFNPFNConvention = Strong
ResolvePFN = True
DefaultUmask = 509
VisibleStatus = AprioriGood
Authorization
{
Default = authenticated
}
}
#Caution: LHCb specific managers
TornadoFileCatalog
{
Protocol = https
UserGroupManager = UserAndGroupManagerDB
SEManager = SEManagerDB
SecurityManager = VOMSSecurityManager
DirectoryManager = DirectoryClosure
FileManager = FileManagerPs
UniqueGUID = True
GlobalReadAccess = True
LFNPFNConvention = Strong
ResolvePFN = True
DefaultUmask = 509
VisibleStatus = AprioriGood
Authorization
{
Default = authenticated
}
}
#BEGIN StorageElement
StorageElement
{
#Local path where the data is stored
BasePath = storageElement
#Port exposed
Port = 9148
#Maximum size in MB you allow to store (0 meaning no limits)
MaxStorageSize = 0
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
#END
#BEGIN S3Gateway
S3Gateway
{
Port = 9169
Authorization
{
Default = authenticated
}
}
#END
#BEGIN TornadoS3Gateway
TornadoS3Gateway
{
Protocol = https
Authorization
{
Default = authenticated
}
}
}
#END
Agents
{
#BEGIN FTS3Agent
FTS3Agent
{
PollingTime = 120
MaxThreads = 10
#How many Operations to treat in one loop
OperationBulkSize = 20
#How many Jobs to monitor in one loop
JobBulkSize = 20
#split JobBulkSize into several chunks
#Bigger numbers (like 100) are efficient when there's a single agent
#When there are multiple agents, it may slow down the overall processing
#because of locks and race conditions
#(This number should of course be smaller than or equal to JobBulkSize)
JobMonitoringBatchSize = 20
#Max number of files to go in a single job
MaxFilesPerJob = 100
#Max number of attempts per file
MaxAttemptsPerFile = 256
#days before removing jobs
DeleteGraceDays = 180
#Max number of deletes per cycle
DeleteLimitPerCycle = 100
#hours before kicking jobs with old assignment tag
KickAssignedHours = 1
#Max number of kicks per cycle
KickLimitPerCycle = 100
#Lifetime in sec of the Proxy we download to delegate to FTS3 (default 36h)
ProxyLifetime = 129600
#Whether we use tokens to submit jobs to FTS3
#VERY EXPERIMENTAL
UseTokens = False
}
}
}
Monitoring
{
Services
{
#BEGIN Monitoring
Monitoring
{
Port = 9137
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
#END
#BEGIN TornadoMonitoring
TornadoMonitoring
{
Protocol = https
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
}
}
Production
{
Services
{
ProductionManager
{
Port = 9180
Authorization
{
Default = authenticated
}
}
#BEGIN TornadoProductionManager
TornadoProductionManager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
}
}
RequestManagement
{
Services
{
#BEGIN ReqManager
ReqManager
{
Port = 9140
#If > 0, delay retry for this many minutes
ConstantRequestDelay = 0
Authorization
{
Default = authenticated
}
}
#END
#BEGIN TornadoReqManager
TornadoReqManager
{
Protocol = https
#If > 0, delay retry for this many minutes
ConstantRequestDelay = 0
Authorization
{
Default = authenticated
}
}
#END
#BEGIN ReqProxy
ReqProxy
{
Port = 9161
#Number of requests to sweep at once
SweepSize = 10
Authorization
{
Default = authenticated
}
}
}
#END
Agents
{
#BEGIN RequestExecutingAgent
RequestExecutingAgent
{
PollingTime = 60
#number of Requests to execute per cycle
RequestsPerCycle = 100
#minimum number of worker processes in the ProcessPool
MinProcess = 20
#maximum number of worker processes in the ProcessPool; recommended to set it to the same value as MinProcess
MaxProcess = 20
#queue depth of the ProcessPool
ProcessPoolQueueSize = 20
#timeout for the ProcessPool finalization
ProcessPoolTimeout = 900
#sleep time before retrying to get a free slot in the ProcessPool
ProcessPoolSleep = 5
#If a positive integer n is given, we fetch n requests at once from the DB. Otherwise, one by one
BulkRequest = 0
OperationHandlers
{
ForwardDISET
{
Location = DIRAC/RequestManagementSystem/Agent/RequestOperations/ForwardDISET
LogLevel = INFO
MaxAttempts = 256
TimeOut = 120
}
ReplicateAndRegister
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/ReplicateAndRegister
FTSMode = False
FTSBannedGroups = dirac_user
FTSBannedGroups += lhcb_user
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 600
}
PutAndRegister
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/PutAndRegister
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 600
}
RegisterReplica
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RegisterReplica
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 120
}
RemoveReplica
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RemoveReplica
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 120
}
RemoveFile
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RemoveFile
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 120
}
RegisterFile
{
Location = DIRAC/DataManagementSystem/Agent/RequestOperations/RegisterFile
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 120
}
SetFileStatus
{
Location = DIRAC/TransformationSystem/Agent/RequestOperations/SetFileStatus
LogLevel = INFO
MaxAttempts = 256
TimeOutPerFile = 120
}
}
}
#END
#BEGIN CleanReqDBAgent
CleanReqDBAgent
{
PollingTime = 60
ControlDirectory = control/RequestManagement/CleanReqDBAgent
#How many days until finished requests are deleted
DeleteGraceDays = 60
#How many requests are deleted per cycle
DeleteLimit = 100
#Whether failed requests are deleted
DeleteFailed = False
#How many hours a request can stay assigned
KickGraceHours = 1
#How many requests are kicked per cycle
KickLimit = 10000
#Number of Days before a Request is cancelled,
#regardless of State
#if set to 0 (default) Requests are never cancelled
CancelGraceDays = 0
}
}
}
ResourceStatus
{
Services
{
ResourceStatus
{
Port = 9160
Authorization
{
Default = SiteManager
select = all
}
}
ResourceManagement
{
Port = 9172
Authorization
{
Default = SiteManager
select = all
}
}
Publisher
{
Port = 9165
Authorization
{
Default = Authenticated
}
}
TornadoResourceStatus
{
Protocol = https
Authorization
{
Default = SiteManager
select = all
}
}
TornadoResourceManagement
{
Protocol = https
Authorization
{
Default = SiteManager
select = all
}
}
TornadoPublisher
{
Protocol = https
Authorization
{
Default = Authenticated
}
}
}
Agents
{
#BEGIN SummarizeLogsAgent
SummarizeLogsAgent
{
#Time between cycles in seconds
PollingTime = 300
#Months of history to keep
Months = 36
}
#END
#BEGIN ElementInspectorAgent
ElementInspectorAgent
{
#Time between cycles in seconds
PollingTime = 300
#Maximum number of threads used by the agent
maxNumberOfThreads = 15
#Type of element that this agent will run on (Resource or Site)
elementType = Resource
}
#END
#BEGIN RucioRSSAgent
RucioRSSAgent
{
#Time between cycles in seconds
PollingTime = 120
}
#END
#BEGIN SiteInspectorAgent
SiteInspectorAgent
{
#Time between cycles in seconds
PollingTime = 300
#Maximum number of threads used by the agent
maxNumberOfThreads = 15
}
#END
#BEGIN CacheFeederAgent
CacheFeederAgent
{
#Time between cycles in seconds
PollingTime = 900
#Shifter to be used by the invoked commands
shifterProxy = DataManager
}
#END
#BEGIN TokenAgent
TokenAgent
{
#Time between cycles in seconds
PollingTime = 3600
#How many hours in advance of the token expiration to notify the token owner
notifyHours = 12
#admin e-mail where to notify about expiring tokens (on top of existing notifications to token owners)
adminMail =
}
#END
#BEGIN EmailAgent
EmailAgent
{
#Time between cycles in seconds
PollingTime = 1800
}
}
}
StorageManagement
{
Services
{
StorageManager
{
Port = 9149
Authorization
{
Default = authenticated
}
}
#BEGIN TornadoStorageManager
TornadoStorageManager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
}
Agents
{
#BEGIN StageMonitorAgent
StageMonitorAgent
{
PollingTime = 120
#Only use these plugins to query StorageElements; all are used if empty
StoragePlugins =
}
#END
StageRequestAgent
{
PollingTime = 120
}
RequestPreparationAgent
{
PollingTime = 120
}
RequestFinalizationAgent
{
PollingTime = 120
}
}
}
Transformation
{
Services
{
TransformationManager
{
Port = 9131
Authorization
{
Default = authenticated
}
}
TornadoTransformationManager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
}
Agents
{
#BEGIN InputDataAgent
InputDataAgent
{
PollingTime = 120
FullUpdatePeriod = 86400
RefreshOnly = False
#If True, query the FileCatalog as the owner of the transformation, needed for MultiVO MetaData filecatalogs
MultiVO = False
}
#END
#BEGIN MCExtensionAgent
MCExtensionAgent
{
PollingTime = 120
}
#END
#BEGIN RequestTaskAgent
RequestTaskAgent
{
#Use a dedicated proxy to submit requests to the RMS
shifterProxy =
#Use delegated credentials. Use this instead of the shifterProxy option (New in v6r20p5)
ShifterCredentials =
#Transformation types to be taken into account by the agent. If the option is empty,
#the value is taken from *Operations/Transformations/DataManipulation*
#with a default of "Replication, Removal"
TransType =
#Location of the transformation plugins
PluginLocation = DIRAC.TransformationSystem.Client.TaskManagerPlugin
#maximum number of threads to use in this agent
maxNumberOfThreads = 15
#Give this option a value if the agent should submit Requests
SubmitTasks = yes
#Status of transformations for which to submit Requests
SubmitStatus = Active
SubmitStatus += Completing
#Number of tasks to submit in one execution cycle per transformation
TasksPerLoop = 50
#Give this option a value if the agent should update the status of tasks
MonitorTasks =
#Status of transformations for which to monitor tasks
UpdateTasksStatus = Active
UpdateTasksStatus += Completing
UpdateTasksStatus += Stopped
#Task statuses considered transient that should be monitored for updates
TaskUpdateStatus = Checking
TaskUpdateStatus += Deleted
TaskUpdateStatus += Killed
TaskUpdateStatus += Staging
TaskUpdateStatus += Stalled
TaskUpdateStatus += Matched
TaskUpdateStatus += Scheduled
TaskUpdateStatus += Rescheduled
TaskUpdateStatus += Completed
TaskUpdateStatus += Submitted
TaskUpdateStatus += Assigned
TaskUpdateStatus += Received
TaskUpdateStatus += Waiting
TaskUpdateStatus += Running
#Number of tasks to be updated in one call
TaskUpdateChunkSize = 0
#Give this option a value if the agent should update the status for files
MonitorFiles =
#Status of transformations for which to monitor Files
UpdateFilesStatus = Active
UpdateFilesStatus += Completing
UpdateFilesStatus += Stopped
#Give this option a value if the agent should check Reserved tasks
CheckReserved =
#Status of transformations for which to check reserved tasks
CheckReservedStatus = Active
CheckReservedStatus += Completing
CheckReservedStatus += Stopped
#Time between cycles in seconds
PollingTime = 120
}
#END
#BEGIN TransformationAgent
TransformationAgent
{
#Time between cycles in seconds
PollingTime = 120
}
#END
#BEGIN TransformationCleaningAgent
TransformationCleaningAgent
{
#MetaData key to use to identify output data
TransfIDMeta = TransformationID
#Location of the OutputData; if the OutputDirectories parameter is not set for
#transformations, only 'MetadataCatalog' has to be used
DirectoryLocations = TransformationDB
DirectoryLocations += MetadataCatalog
#Enable or disable, default enabled
EnableFlag = True
#How many days to wait before archiving transformations
ArchiveAfter = 7
#Shifter to use for removal operations, default is empty and
#using the transformation owner for cleanup
shifterProxy =
#Which transformation types to clean
#If not filled, transformation types are taken from
#Operations/Transformations/DataManipulation
#and Operations/Transformations/DataProcessing
TransformationTypes =
#Time between cycles in seconds
PollingTime = 3600
}
#END
#BEGIN ValidateOutputDataAgent
ValidateOutputDataAgent
{
#Time between cycles in seconds
PollingTime = 120
}
#END
#BEGIN WorkflowTaskAgent
WorkflowTaskAgent
{
#Transformation types to be taken into account by the agent
TransType = MCSimulation
TransType += DataReconstruction
TransType += DataStripping
TransType += MCStripping
TransType += Merge
#Task statuses considered transient that should be monitored for updates
TaskUpdateStatus = Submitted
TaskUpdateStatus += Received
TaskUpdateStatus += Waiting
TaskUpdateStatus += Running
TaskUpdateStatus += Matched
TaskUpdateStatus += Completed
TaskUpdateStatus += Failed
#Status of transformations for which to monitor tasks
UpdateTasksStatus = Active
UpdateTasksStatus += Completing
UpdateTasksStatus += Stopped
#Number of tasks to be updated in one call
TaskUpdateChunkSize = 0
#Give this option a value if the agent should submit workflow tasks (Jobs)
SubmitTasks = yes
#Status of transformations for which to submit jobs to WMS
SubmitStatus = Active
SubmitStatus += Completing
#Number of tasks to submit in one execution cycle per transformation
TasksPerLoop = 50
#Use a dedicated proxy to submit jobs to the WMS
shifterProxy =
#Use delegated credentials. Use this instead of the shifterProxy option (New in v6r20p5)
ShifterCredentials =
#Give this option a value if the agent should check Reserved tasks
CheckReserved =
#Give this option a value if the agent should monitor tasks
MonitorTasks =
#Give this option a value if the agent should monitor files
MonitorFiles =
#Status of transformations for which to monitor Files
UpdateFilesStatus = Active
UpdateFilesStatus += Completing
UpdateFilesStatus += Stopped
#Status of transformations for which to check reserved tasks
CheckReservedStatus = Active
CheckReservedStatus += Completing
CheckReservedStatus += Stopped
#Location of the transformation plugins
PluginLocation = DIRAC.TransformationSystem.Client.TaskManagerPlugin
#maximum number of threads to use in this agent
maxNumberOfThreads = 15
#Time between cycles in seconds
PollingTime = 120
#Fill in this option if you want to activate bulk submission (to speed up the submission)
BulkSubmission = false
}
#END
#BEGIN DataRecoveryAgent
DataRecoveryAgent
{
PollingTime = 3600
EnableFlag = False
MailTo =
MailFrom =
#List of TransformationIDs that will not be treated
TransformationsToIgnore =
#List of Transformation Statuses to treat
TransformationStatus = Active
TransformationStatus += Completing
#List of transformations that do not have input data, by default Operations/Transformation/ExtendableTransfTypes
TransformationsNoInput =
#List of transformations that do have input data, by default Operations/Transformation/DataProcessing (- ExtendableTransfTypes)
TransformationsWithInput =
#Print every N treated jobs to monitor progress
PrintEvery = 200
#Obtain the job information from the JDL instead of the JobMonitor service. This is slightly faster but requires the ProductionOutputData information to be in the JDL
JobInfoFromJDLOnly = False
}
}
}
WorkloadManagement
{
Services
{
JobManager
{
Port = 9132
MaxParametricJobs = 100
Authorization
{
Default = authenticated
}
}
#BEGIN TornadoJobManager
TornadoJobManager
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
#BEGIN TornadoPilotLogging
TornadoPilotLogging
{
Protocol = https
Authorization
{
Default = authenticated
sendMessage = Operator
sendMessage += GenericPilot
getMetadata = Operator
getMetadata += TrustedHost
finaliseLogs = Operator
finaliseLogs += Pilot
finaliseLogs += GenericPilot
}
}
#END
#BEGIN JobMonitoring
JobMonitoring
{
Port = 9130
Authorization
{
Default = authenticated
}
}
#END
#BEGIN TornadoJobMonitoring
TornadoJobMonitoring
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
JobStateUpdate
{
Port = 9136
Authorization
{
Default = authenticated
}
MaxThreads = 100
}
#BEGIN TornadoJobStateUpdate
TornadoJobStateUpdate
{
Protocol = https
Authorization
{
Default = authenticated
}
}
#END
#Parameters of the WMS Matcher service
Matcher
{
Port = 9170
MaxThreads = 20
Authorization
{
Default = authenticated
getActiveTaskQueues = JobAdministrator
}
}
#Parameters of the WMS Administrator service
WMSAdministrator
{
Port = 9145
Authorization
{
Default = Operator
getJobPilotOutput = authenticated
}
}
#BEGIN TornadoWMSAdministrator
TornadoWMSAdministrator
{
Protocol = https
Authorization
{
Default = Operator
getJobPilotOutput = authenticated
}
}
#END
#Parameters of the PilotManager service
PilotManager
{
Port = 9171
Authorization
{
Default = authenticated
}
}
#BEGIN SandboxStore
SandboxStore
{
Port = 9196
LocalSE = ProductionSandboxSE
MaxThreads = 200
MaxSandboxSizeMiB = 10
BasePath = /opt/dirac/storage/sandboxes
#If true, uploads the sandbox via diracx on an S3 storage
UseDiracXBackend = False
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
#END
#BEGIN TornadoSandboxStore
TornadoSandboxStore
{
Protocol = https
LocalSE = ProductionSandboxSE
MaxThreads = 200
MaxSandboxSizeMiB = 10
BasePath = /opt/dirac/storage/sandboxes
Authorization
{
Default = authenticated
FileTransfer
{
Default = authenticated
}
}
}
#END
OptimizationMind
{
Port = 9175
}
}
Agents
{
#BEGIN PilotSyncAgent
PilotSyncAgent
{
PollingTime = 600
#Directory where the files can be moved. If running on the WebApp, use /opt/dirac/webRoot/www/pilot
SaveDirectory =
#List of locations where to upload the pilot files. Can be https://some.where, or DIRAC SE names.
UploadLocations =
#Set to False (or No, or N) to exclude the master CS from the list of CS servers
IncludeMasterCS = True
}
#END
#BEGIN PilotStatusAgent
PilotStatusAgent
{
PollingTime = 300
#Flag enabling sending of the Pilot accounting info to the Accounting Service
PilotAccountingEnabled = yes
}
#END
#BEGIN PilotLoggingAgent
PilotLoggingAgent
{
PollingTime = 600
}
#END
JobAgent
{
PollingTime = 20
FillingModeFlag = true
StopOnApplicationFailure = true
StopAfterFailedMatches = 10
StopAfterHostFailures = 3
SubmissionDelay = 10
DefaultLogLevel = INFO
JobWrapperTemplate = DIRAC/WorkloadManagementSystem/JobWrapper/JobWrapperTemplate.py
}
#BEGIN StalledJobAgent
StalledJobAgent
{
StalledTimeHours = 2
FailedTimeHours = 6
PollingTime = 3600
MaxNumberOfThreads = 15
#List of sites for which we want to be more tolerant before declaring the job stalled
StalledJobsTolerantSites =
StalledJobsToleranceTime = 0
#List of sites for which we want to Reschedule (instead of declaring Failed) the Stalled jobs
StalledJobsToRescheduleSites =
SubmittingTime = 300
MatchedTime = 7200
RescheduledTime = 600
Enable = True
}
#END
#BEGIN JobCleaningAgent
JobCleaningAgent
{
PollingTime = 3600
#Maximum number of jobs to be processed in one cycle
MaxJobsAtOnce = 500
#Maximum number of jobs to be processed in one cycle for HeartBeatLoggingInfo removal
MaxHBJobsAtOnce = 0
RemoveStatusDelay
{
#Number of days after which Done jobs are removed
Done = 7
#Number of days after which Killed jobs are removed
Killed = 7
#Number of days after which Failed jobs are removed
Failed = 7
#Number of days after which any job, irrespective of status, is removed (-1 to disable this feature)
Any = -1
}
RemoveStatusDelayHB
{
#Number of days after which HeartBeatLoggingInfo for Done jobs is removed (set a positive value to enable)
Done = -1
#Number of days after which HeartBeatLoggingInfo for Killed jobs is removed
Killed = -1
#Number of days after which HeartBeatLoggingInfo for Failed jobs is removed
Failed = -1
}
#Which production type jobs _not_ to remove, takes default from Operations/Transformations/DataProcessing
ProductionTypes =
}
#END
#BEGIN SiteDirector
SiteDirector
{
#VO treated (leave empty for auto-discovery)
VO =
#Same as VO, kept for backward compatibility (leave empty for auto-discovery)
Community =
#the DN of the certificate proxy used to submit pilots. If not found here, what is in Operations/Pilot section of the CS will be used
PilotDN =
#List of sites that will be treated by this SiteDirector (if no value is set, any Site defined in the CS can be considered)
Site =
#List of CEs that will be treated by this SiteDirector (if no value is set, any CE defined in the CS can be considered)
CEs =
#List of CE types that will be treated by this SiteDirector (if no value is set, any CE type defined in the CS can be considered)
CETypes =
#List of Tags that are required to be present in the CE/Queue definition
Tags =
#How many cycles to skip if queue is not working
FailedQueueCycleFactor = 10
#Every N cycles, pilot status update is performed by the SiteDirector
PilotStatusUpdateCycleFactor = 10
#Every N cycles, pilot submission is performed by the SiteDirector
PilotSubmissionCycleFactor = 1
#The maximum length of a queue (in seconds). Default: 3 days
MaxQueueLength = 259200
#Max number of pilots to submit per cycle
MaxPilotsToSubmit = 100
#Boolean value that indicates if the pilot job will send information for accounting
SendPilotAccounting = True
#Working directory containing the pilot files if not set in the CE
WorkDirectory =
}
#END
#BEGIN PushJobAgent
PushJobAgent
{
#VO treated (leave empty for auto-discovery)
VO =
#The DN of the certificate proxy used to submit pilots/jobs. If not found here, what is in Operations/Pilot section of the CS will be used
PilotDN =
#List of sites that will be treated by this PushJobAgent ("any" can refer to any Site defined in the CS)
Site =
#List of CE types that will be treated by this PushJobAgent ("any" can refer to any CE type defined in the CS)
CETypes =
#List of CEs that will be treated by this PushJobAgent ("any" can refer to any CE defined in the CS)
CEs =
#Max number of jobs to handle simultaneously
MaxJobsToSubmit = 100
#How many cycles to skip if queue is not working
FailedQueueCycleFactor = 10
#How the agent manages the submission of the jobs
SubmissionPolicy = JobWrapper
#The CVMFS location to be used for the job execution on the remote site
CVMFSLocation = /cvmfs/dirac.egi.eu/dirac/pro
#Clean the task after the job is done
CleanTask = True
}
#END
#BEGIN StatesAccountingAgent
StatesAccountingAgent
{
#the name of the message queue used for the failover
MessageQueue = dirac.wmshistory
#Polling time. For this agent it should always be 15 minutes.
PollingTime = 900
}
#END
#BEGIN TaskQueuesAgent
TaskQueuesAgent
{
PollingTime = 120
}
#END
}
Executors
{
Optimizers
{
Load = JobPath
Load += JobSanity
Load += InputData
Load += JobScheduling
}
JobPath
{
}
JobSanity
{
}
InputData
{
}
JobScheduling
{
}
}
#BEGIN JobWrapper
JobWrapper
{
BufferLimit = 10485760
CleanUpFlag = True
DefaultCatalog =
DefaultCPUTime = 600
DefaultErrorFile = std.err
DefaultOutputFile = std.out
DiskSE = -disk
DiskSE += -DST
DiskSE += -USER
MasterCatalogOnlyFlag = True
MaxJobPeekLines = 20
OutputSandboxLimit = 10485760
#Retry the upload of the output file if only one output SE is defined
RetryUpload = False
TapeSE = -tape
TapeSE += -RDST
TapeSE += -RAW
MinOutputDataBufferGB = 5
}
}
}
Resources
{
#Section for identity providers; the subsections are the names of the identity providers
#https://dirac.readthedocs.org/en/latest/AdministratorGuide/Resources/identityprovider.html
IdProviders
{
#EGI Checkin type:
EGI_Checkin
{
#What supported type of provider does it belong to
ProviderType = CheckIn
#Description of the client parameters registered on the identity provider side.
#Look here for information about the client parameters description: https://tools.ietf.org/html/rfc8414#section-2
issuer = https://issuer
client_id = type_client_id_here_received_after_client_registration
client_secret = type_client_secret_here_received_after_client_registration
#Scopes that will be used by default
scope = openid
scope += profile
scope += offline_access
scope += eduperson_entitlement
scope += cert_entitlement
}
#WLCG IAM type:
WLCG_IAM
{
ProviderType = IAM
issuer = https://issuer
client_id = type_client_id_here_received_after_client_registration
client_secret = type_client_secret_here_received_after_client_registration
scope = openid
scope += profile
scope += offline_access
scope += eduperson_entitlement
scope += cert_entitlement
}
}
#Section for setting options for ComputingElements
Computing
{
#ComputingElement options can be set with different degrees of specialization:
#- /Resources/Computing/CEDefaults : for all computing elements
#- /Resources/Computing/<CEType> : for CEs of a given type, e.g., HTCondorCE or ARC
#- /Resources/Sites/<grid>/<site>/CEs : for all CEs at a given site
#- /Resources/Sites/<grid>/<site>/CEs/<CEName> : for a specific CE
#Values are overwritten by the most specialized option.
#Default local CE to use on all CEs (Pool, Singularity, InProcess, etc)
#There is no default value
DefaultLocalCEType = Singularity
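#A hypothetical illustration of the precedence rules above (CE names are made up):
#with
#  /Resources/Computing/CEDefaults/NumberOfProcessors = 1
#  /Resources/Sites/LCG/LCG.Example.org/CEs/ce01.example.org/NumberOfProcessors = 8
#pilots at ce01.example.org would use NumberOfProcessors = 8 (the most specialized
#value), while all other CEs fall back to the CEDefaults value of 1.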
#The options below can be valid for all computing element types
CEDefaults
{
#Will be added to the pilot configuration as /LocalSite/SharedArea
SharedArea = /cvmfs/lhcb.cern.ch/lib
#For adding extra environment variables (only for pilots submitted by SiteDirectors)
UserEnvVariables = DIRACSYSCONFIG:::pilot.cfg
UserEnvVariables += RUCIO_HOME:::/home/dirac/rucio
#for adding some extra pilot options (only for pilots submitted by SiteDirectors)
ExtraPilotOptions = --pilotLogging True
#for adding some generic pilot options (only for pilots submitted by SiteDirectors)
#which will be translated as "-o" options of the Pilot
GenericOptions = diracInstallOnly
GenericOptions += someThing
#for adding the --modules=value option to dirac-pilot
Modules =
#for adding the --pipInstallOptions=value to dirac-pilot
PipInstallOptions = --index-url https://lhcb-repository.web.cern.ch/repository/pypi/simple
#The upper limit for the NumberOfProcessors queue parameter set by the :mod:`~DIRAC.ConfigurationSystem.Agent.Bdii2CSAgent`
GLUE2ComputingShareMaxSlotsPerJob_limit = 8
}
Singularity
{
#The root image location for the container to use
#Default: /cvmfs/cernvm-prod.cern.ch/cvm4
ContainerRoot = /cvmfs/cernvm-prod.cern.ch/cvm4
#The binary to start the container
#default: singularity
ContainerBin = /opt/extras/bin/singularity
#List of directories to bind
ContainerBind = /etc/grid-security
ContainerBind += someDir:::BoundHere
#Extra options for starting the container
ContainerOptions = --cleanenv
#Flag for re-installing, or not, DIRAC in the container (default: True)
InstallDIRACInContainer = False
#If set to True container work area won't be deleted at end of job (default: False)
KeepWorkArea = True
}
#For the options for the ARC Computing Element see :mod:`~DIRAC.Resources.Computing.ARCComputingElement`
ARC
{
}
#For the options for the HTCondorCEs see :mod:`~DIRAC.Resources.Computing.HTCondorCEComputingElement`
HTCondor
{
}
}
#This section is used to define a compatibility matrix between dirac platforms (:ref:`admin_dirac-platform`) and OS versions.
OSCompatibility
{
#What's on the left is an example of a dirac platform as determined by the dirac-platform script (:ref:`admin_dirac-platform`).
#This platform is declared to be compatible with a list of "OS" strings.
#These strings are identifying the architectures of computing elements.
#This list of strings can be constructed from the "Architecture" + "OS" fields
#that can be found in the CEs description in the CS (:ref:`cs-site`).
#This compatibility is, by default, used by the SiteDirector when deciding whether to send a pilot to a certain CE:
#The SiteDirector matches "TaskQueues" to Computing Element capabilities
Linux_x86_64_glibc-2.17 = ...
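#A purely illustrative (not authoritative) example: such a platform could be
#declared compatible with OS strings built from the CE "Architecture" + "OS" fields,
#e.g. something like
#Linux_x86_64_glibc-2.17 = x86_64_CentOS_7
#Linux_x86_64_glibc-2.17 += x86_64_AlmaLinux_9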
}
#Section for proxy providers; the subsections are the names of the proxy providers
#https://dirac.readthedocs.org/en/latest/AdministratorGuide/Resources/proxyprovider.html
ProxyProviders
{
#DIRACCA type:
MY_DIRACCA
{
#Main option, to show which proxy provider type you want to register.
ProviderType = DIRACCA
#The path to the CA certificate. This option is required.
CertFile = /opt/dirac/etc/grid-security/DIRACCA-EOSH/cert.pem
#The path to the CA key. This option is required.
KeyFile = /opt/dirac/etc/grid-security/DIRACCA-EOSH/key.pem
#The distinguished name fields that must contain the exact same contents as that field in the CA's
#DN. If this parameter is not specified, the default value will be an empty list.
Match = O
Match += OU
#The distinguished name fields list that must be present. If this parameter is not specified, the
#default value will be "CN".
Supplied = C
Supplied += CN
#The distinguished name fields list that are allowed but not required. If this parameter is not
#specified, the default value will be "C, O, OU, emailAddress"
Optional = emailAddress
#Order of the distinguished name fields in a created user certificate. If this parameter is not
#specified, the default value will be "C, O, OU, CN, emailAddress"
DNOrder = C
DNOrder += O
DNOrder += OU
DNOrder += emailAddress
DNOrder += CN
#Default values for the distinguished name fields.
C = FR
O = DIRAC
OU = DIRAC TEST
#The path to the openssl configuration file. This option is optional and its use is not recommended.
#But if you choose to use this option, it is recommended to use a relatively simple configuration.
#All required parameters will be taken from the configuration file, except "DNOrder".
CAConfigFile = /opt/dirac/pro/etc/openssl_config_ca.cnf
}
#OAuth2 type:
MY_OAuth2
{
ProviderType = OAuth2
#Authorization server's issuer identifier URL
issuer = https://masterportal-pilot.aai.egi.eu/mp-oa2-server
#Identifier of OAuth client
client_id = myproxy:oa4mp
client_id += 2012:/client_id/aca7c8dfh439fewjb298fdb
#Secret key of OAuth client
client_secret = ISh-Q32bkXRf-HD2hdh93d-hd20DH2-wqedwiU@S22
#OAuth2 parameter specified in https://tools.ietf.org/html/rfc6749
prompt = consent
#Some specific parameter for specific proxy provider
max_proxylifetime = 864000
proxy_endpoint = https://masterportal-pilot.aai.egi.eu/mp-oa2-server/getproxy
}
}
#Where all your Catalogs are defined
FileCatalogs
{
#There is one section per catalog
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Catalog/index.html
<MyCatalog>
{
CatalogType = <myCatalogType> # used for plugin selection
CatalogURL = <myCatalogURL> # used for DISET URL
}
}
#FTS endpoint definition http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts.html#fts-servers-definition
FTSEndpoints
{
FTS3
{
CERN-FTS3 = https://fts3.cern.ch:8446
}
}
#Abstract definition of storage elements, used to be inherited.
#see http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Storages/index.html#storageelementbases
StorageElementBases
{
#The base SE definition can contain all the options of a normal SE
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Storages/index.html#storageelements
CERN-EOS
{
BackendType = eos # backend type of storage element
SEType = T0D1 # Tape or Disk SE
UseCatalogURL = True # use the stored URL or generate it (default False)
ReadAccess = True # Allowed for Read if no RSS enabled
WriteAccess = True # Allowed for Write if no RSS enabled
CheckAccess = True # Allowed for Check if no RSS enabled
RemoveAccess = True # Allowed for Remove if no RSS enabled
OccupancyLFN = /lhcb/storageDetails.json # JSON file containing the occupancy details
SpaceReservation = LHCb-EOS # Space reservation name if any. Concept like SpaceToken
ArchiveTimeout = 84600 # Timeout for the FTS archiving
BringOnlineTimeout = 84600 # Timeout for the bring online operation used by FTS
WLCGTokenBasePath = /eos/lhcb # EXPERIMENTAL Base path to which token paths are relative
#Protocol section, see http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Storages/index.html#available-protocol-plugins
GFAL2_SRM2
{
Host = srm-eoslhcb.cern.ch
Port = 8443
PluginName = GFAL2_SRM2 # If different from the section name
Protocol = srm # primary protocol
Path = /eos/lhcb/grid/prod # base path
Access = remote
SpaceToken = LHCb-EOS
WSUrl = /srm/v2/server?SFN=
InputProtocols = file
InputProtocols += https
InputProtocols += root
InputProtocols += srm
InputProtocols += gsiftp # Allows overwriting the default list of protocols understood as input
OutputProtocols = file
OutputProtocols += https
OutputProtocols += root
OutputProtocols += srm
OutputProtocols += gsiftp # Allows overwriting the default list of protocols that can be generated
}
}
}
#http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Storages/index.html#storageelements
StorageElements
{
#Just inherit everything from CERN-EOS, without change
CERN-DST-EOS
{
BaseSE = CERN-EOS
}
#inherit from CERN-EOS
CERN-USER
{
BaseSE = CERN-EOS
#Modify the options for Gfal2
GFAL2_SRM2
{
Path = /eos/lhcb/grid/user
SpaceToken = LHCb_USER
}
#Add an extra protocol
GFAL2_XROOT
{
Host = eoslhcb.cern.ch
Port = 8443
Protocol = root
Path = /eos/lhcb/grid/user
Access = remote
SpaceToken = LHCb-EOS
WSUrl = /srm/v2/server?SFN=
}
}
CERN-ALIAS
{
Alias = CERN-USER # Use CERN-USER when instantiating CERN-ALIAS
}
}
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Storages/index.html#storageelementgroups
StorageElementGroups
{
CERN-Storages = CERN-DST-EOS
CERN-Storages += CERN-USER
#Default SEs to be used when uploading output data from Payloads
SE-USER = CERN-USER
#Default SEs to be used as failover SEs uploading output data from Payloads.
#This option is used in the Job Wrapper and, if set, requires the RequestManagementSystem to be installed
Tier1-Failover = CERN-FAILOVER
Tier1-Failover += CNAF-FAILOVER
}
#Definition of the sites
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/site.html
Sites
{
LCG
{
#BEGIN SiteConfiguration
LCG.CERN.ch
{
#Local Storages
SE = CERN-RAW
SE += CERN-RDST
SE += CERN-USER # (Optional) SEs Local to the site
#Overwrite definitions of StorageElements (discouraged)
#or StorageElementGroups for that Site
AssociatedSEs
{
#Tier1-Failover is now only CERN-FAILOVER when running a Job at CERN
Tier1-Failover = CERN-FAILOVER
}
Name = CERN-PROD # (Optional) Name of the site from the admin point of view, e.g. in GOCDB
Coordinates = 06.0458:46.2325 # (Optional) Geographical coordinates
Mail = grid-cern-prod-admins@cern.ch # (Optional) Site Admin email
MoUTierLevel = 0 # (Optional) Tier level
Description = CERN European Organization for Nuclear Research # (Optional) ...
#Subsection to describe each CE available
CEs
{
#Subsection named as the CE fully qualified name
ce503.cern.ch
{
#(Optional) CE architecture
architecture = x86_64
#(Optional) CE operating system in a DIRAC format (purely for description)
OS = ScientificCERNSLC_Carbon_6.4
#(Optional) Boolean attribute that indicates if the site accepts pilots (default: True)
Pilot = False
#Type of CE, can take any CE type DIRAC recognizes (:ref:`CE`)
CEType = HTCondorCE
#(Optional) Type of 'Inner' CE, normally empty. Default = "InProcess".
#Possibilities: potentially all CE types, but in practice
#the most valid would be: InProcess, Sudo, Singularity, Pool.
#Pool CE in turn uses InProcess (Default)
#or Sudo or Singularity. To specify, use Pool/ce_type.
#This option can also go at the Queue level.
LocalCEType = Pool
#(Optional) max number of processors that DIRAC pilots are allowed to exploit. Implicit default = 1
NumberOfProcessors = 12
#(Optional) Number of available worker nodes per allocation.
#Values can be a number (e.g. 2 nodes) or a range of values
#(e.g. from 2 to 4 nodes) which leaves the choice to the batch
#system.
NumberOfNodes = 2
#NumberOfNodes = 2-4
#(Optional) CE allows *whole node* jobs
WholeNode = True
#(Optional) List of tags specific for the CE
Tag = GPU
Tag += 96RAM
#(Optional) List of required tags that a job to be eligible must have
RequiredTag = GPU
RequiredTag += 96RAM
#Queues available for this VO in the CE
Queues
{
#Name of the queue
ce503.cern.ch-condor
{
#Name of the queue in the corresponding CE if not the same
#as the name of the queue section
#(should be avoided)
CEQueueName = pbs-grid
VO = lhcb
#CE CPU Scaling Reference
SI00 = 3100
#Maximum number of jobs in all statuses
MaxTotalJobs = 5000
#Maximum number of jobs in waiting status
MaxWaitingJobs = 200
#Maximum time allowed to jobs to run in the queue
maxCPUTime = 7776
#The URL where to find the outputs
OutputURL = gsiftp://localhost
#Overwrites NumberOfProcessors at the CE level
NumberOfProcessors = 12
#Overwrites NumberOfNodes at the CE level
NumberOfNodes = 12
#Overwrites WholeNode at the CE level
WholeNode = True
#Overwrites LocalCEType at the CE level
LocalCEType = Pool/Singularity
#List of tags specific for the Queue
Tag = MultiProcessor
#List of required tags that a job to be eligible must have
RequiredTag = GPU
RequiredTag += 96RAM
}
}
VO = lhcb
MaxRAM = 0
UseLocalSchedd = False
DaysToKeepLogs = 1
}
}
}
#END
}
}
#BEGIN CountriesConfiguration
Countries
{
#Configuration for ``pl`` sites
pl
{
#Redirect to ``de`` configuration
AssignedTo = de
}
de
{
AssociatedSEs
{
#Overwrite the Tier1-Failover StorageElementGroup
#For all German sites which do not have a specific
#configuration (see https://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/storage.html#mapping-storages-to-sites-and-countries)
Tier1-Failover = GRIDKA-FAILOVER
}
}
}
#END
#Configuration for logging backends
#https://dirac.readthedocs.io/en/latest/DeveloperGuide/AddingNewComponents/Utilities/gLogger/gLogger/Basics/index.html#backend-resources
LogBackends
{
#Configure the stdout backend
stdout
{
LogLevel = INFO
}
#Example for a log backend sending to message queue
#see https://dirac.readthedocs.io/en/latest/AdministratorGuide/ServerInstallations/centralizedLogging.html
mqLogs
{
MsgQueue = lhcb-mb.cern.ch::Queues::lhcb.dirac.logging
#Name of the plugin if not the section name
Plugin = messageQueue
}
}
}
Operations
{
MonitoringBackends
{
#This flag will globally enable Accounting and ES based monitoring of all types in DIRAC.
#`Accounting` is the default value, and `Monitoring` should be added if you wish to have both.
#If you want to override it and have a specific backend for a monitoring type, you should add a flag for it.
#For more info https://dirac.readthedocs.io/en/integration/AdministratorGuide/Systems/MonitoringSystem/index.html
Default = Accounting
#WMSHistory = Monitoring
#DataOperation = Accounting, Monitoring
#PilotSubmissionMonitoring = Accounting
#AgentMonitoring = ...
#ServiceMonitoring = ...
#RMSMonitoring = ...
}
#This is the default section of operations.
#Any value here can be overwritten in the setup-specific section
Defaults
{
#Flag for globally disabling the use of the SecurityLogging service
#This is False by default, as installations should migrate to centralized logging
#(see https://dirac.readthedocs.io/en/latest/AdministratorGuide/ServerInstallations/centralizedLogging.html#logstash-and-elk-configurations)
EnableSecurityLogging = False
DataManagement
{
#see http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Catalog/index.html#multi-protocol
#for the next 4 options
AccessProtocols = srm
AccessProtocols += dips
RegistrationProtocols = srm
RegistrationProtocols += dips
StageProtocols = srm
ThirdPartyProtocols = srm
WriteProtocols = srm
WriteProtocols += dips
#FTS related options. See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts.html
FTSPlacement
{
FTS3
{
ServerPolicy = Random # http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts.html#ftsserver-policy
#Plugin to alter default TPC selection list
FTS3Plugin = Default # http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts.html#fts3-plugins
}
}
#Matrix to define the multihop strategy.
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/DataManagement/fts3.html#multihop-support
MultiHopMatrixOfShame
{
#Used for any source which does not have a more specific rule
Default
{
#Default -> Default basically means "anything not covered by the other defined routes"
Default = GlobalDefault
#Hop between "anything else" and IN3P3-DST
IN2P3-DST = DefaultToIN2P3-DST
#Hop between "anything else" and any SE inheriting from CNAF-Disk
CNAF-Disk = DefaultToCNAF-Disk
}
#Any transfer starting from CERN-RAW
CERN-RAW
{
#CERN-RAW -> anywhere else
Default = DefaultFromCERN-RAW
#Do not use multihop between CERN-RAW and SE inheriting from CERN-Disk
CERN-Disk = disabled
#CERN-RAW -> any SE inheriting from CNAF-Disk
CNAF-Disk = CERN-RAW-CNAF-Disk
#CERN-RAW->CNAF-DST (takes precedence over CERN-RAW -> CNAF-Disk)
CNAF-DST = CERN-RAW-CNAF-DST
#Do not use multihop between CERN-RAW and IN2P3-DST
IN2P3-DST = disabled
}
}
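#Illustrative lookup for the matrix above: for a transfer CERN-RAW -> CNAF-DST,
#the most specific entry wins: CERN-RAW/CNAF-DST first, then CERN-RAW/CNAF-Disk
#(if CNAF-DST inherits from CNAF-Disk), then CERN-RAW/Default, and finally
#Default/Default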
}
#Specify how jobs access their data
#None of these fields is mandatory
#See https://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/WorkloadManagement/InputDataResolution.html
InputDataPolicy
{
#Default policy
Default = DIRAC.WorkloadManagementSystem.Client.InputDataByProtocol
#A job running at CERN would stream the data
LCG.CERN.cern = DIRAC.WorkloadManagementSystem.Client.InputDataByProtocol
#A job running at GRIDKA would download the files on the WN
LCG.GRIDKA.de = DIRAC.WorkloadManagementSystem.Client.DownloadInputData
#Shortcut for the JobAPI: job.setInputDataPolicy('Download')
Download = DIRAC.WorkloadManagementSystem.Client.DownloadInputData
#Shortcut for the JobAPI: job.setInputDataPolicy('Protocol')
Protocol = DIRAC.WorkloadManagementSystem.Client.InputDataByProtocol
#Whether all replicas, or only a limited set, are considered by a Job in case of streaming
#See src/DIRAC/WorkloadManagementSystem/Client/InputDataByProtocol.py
AllReplicas = True
#List of protocols to use for streaming
Protocols
{
#This list is used if we are getting a file from a
#StorageElement local to the site we are running on
Local = file
Local += xroot
Local += root
#This list is used if the SE is not local
Remote = xroot
Remote += root
}
#Module used for InputData resolution if not specified in the JDL
InputDataModule = DIRAC.Core.Utilities.InputDataResolution
}
Logging
{
#Default log backends and level applied to Services if
#they are not defined in the service-specific section
DefaultServicesBackends = stdout
DefaultServicesLogLevel = INFO
#Similar options for agents
DefaultAgentsBackends = stdout
DefaultAgentsBackends += mqLogs
DefaultAgentsLogLevel = VERBOSE
#Default log level that is applied in last resort
DefaultLogLevel = DEBUG
}
#Options for the pilot3
#See https://dirac.readthedocs.io/en/latest/AdministratorGuide/Systems/WorkloadManagement/Pilots/Pilots3.html
Pilot
{
pilotRepo = https://github.com/DIRACGrid/Pilot.git # git repository of the pilot
pilotScriptsPath = Pilot # Path to the code inside the Git repository
pilotRepoBranch = master # Branch to use
pilotVORepo = https://github.com/MyDIRAC/VOPilot.git # git repository of the pilot extension
pilotVOScriptsPath = VOPilot # Path to the code inside the Git repository
pilotVORepoBranch = master # Branch to use
workDir = /tmp/pilot3Files # Local work directory on the master CS used for synchronisation
}
Services
{
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Catalog/index.html
Catalogs
{
CatalogList = Catalog1
CatalogList += Catalog2
CatalogList += etc # List of catalogs defined in Resources to use
#Each catalog defined in Resources should also contain some runtime options here
<MyCatalog>
{
Status = Active # enable the catalog or not (default Active)
AccessType = Read-Write # No default, must be set
Master = True # See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Catalog/index.html#master-catalog
#Dynamic conditions to enable or not the catalog
#See http://dirac.readthedocs.io/en/latest/AdministratorGuide/Resources/Catalog/index.html#conditional-filecatalogs
Conditions
{
WRITE = <myWriteCondition>
READ = <myReadCondition>
ALL = <valid for all conditions>
<myMethod> = <myCondition valid only for myMethod>
}
}
}
}
}
#Options in this section will only be used when running with the
#<MySetup> setup
<MySetup>
{
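#Hypothetical example (commented out): any option from the Defaults section
#above can be repeated here to override it for this setup only, e.g.
#DataManagement
#{
#  WriteProtocols = srm
#}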
}
}