Sites

Site Names

Sites have names resulting from the concatenation of:

  • Domain: Grid Domain name, expressed in uppercase, for example: LCG, EELA

  • Site: Institution, for example: CPPM

  • Country: code of the country where the site is located, expressed in lowercase, for example: fr

The full DIRAC Site Name thus takes the form: [Domain].[Site].[co]. The full site names are used everywhere the site resources are referenced in the context of a particular Domain: in the accounting, monitoring, configuration of the Operations parameters, etc.

Examples of valid site names are:

  • LCG.CERN.ch

  • CLOUD.IN2P3.fr

  • VAC.Manchester.uk

  • DIRAC.farm.cern

The [Domain] may imply a (set of) technologies used for exploiting the resources, even though this is not necessarily the case. These Domains are used mostly for reporting purposes, and it is the responsibility of the administrator of the DIRAC installation to choose them in such a way that they are meaningful for the communities and for the computing resources served by the installation. In any case, DIRAC is used as the default Domain if nothing else is specified for a given resource.

The Domain, Site and country must be alphanumeric strings, unique irrespective of case, possibly also containing the characters "_" and "-".
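
For illustration, the naming rules above can be expressed as a small validation helper. The following is a minimal sketch in plain Python; the regular expression and the function name are assumptions derived from the rules just stated, not code shipped with DIRAC:

  import re

  # Assumed pattern derived from the rules above: alphanumeric strings that
  # may also contain "_" and "-", joined by dots into [Domain].[Site].[co]
  SITE_NAME_RE = re.compile(
      r"^(?P<domain>[A-Za-z0-9_-]+)\."   # Domain, e.g. LCG
      r"(?P<site>[A-Za-z0-9_-]+)\."      # Site, e.g. CERN
      r"(?P<country>[A-Za-z0-9_-]+)$"    # country code, e.g. ch
  )

  def splitSiteName(fullName):
      """Split a full DIRAC site name into (domain, site, country)."""
      match = SITE_NAME_RE.match(fullName)
      if not match:
          raise ValueError("Invalid DIRAC site name: %r" % fullName)
      return match.group("domain"), match.group("site"), match.group("country")

  print(splitSiteName("LCG.CERN.ch"))  # ('LCG', 'CERN', 'ch')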

Configuration

Site configuration
  LCG.CERN.ch
  {
    # Local Storage Elements
    SE = CERN-RAW, CERN-RDST, CERN-USER # (Optional) SEs local to the site

    # Overwrite definitions of StorageElements (discouraged)
    # or StorageElementGroups for that Site
    AssociatedSEs
    {
      # Tier1-Failover resolves to CERN-FAILOVER only when running a job at CERN
      Tier1-Failover = CERN-FAILOVER
    }
    Name = CERN-PROD # (Optional) Name of the site as given by the site administrators, e.g. in GOCDB
    Coordinates = 06.0458:46.2325 # (Optional) Geographical coordinates (longitude:latitude)
    Mail = grid-cern-prod-admins@cern.ch # (Optional) Site Admin email
    MoUTierLevel = 0 # (Optional) Tier level
    Description = CERN European Organization for Nuclear Research # (Optional) ...
    # Subsection to describe each CE available
    CEs
    {
      # Subsection named as the CE fully qualified name
      ce503.cern.ch
      {

        # (Optional) CE architecture
        architecture = x86_64

        # (Optional) CE operating system in a DIRAC format (purely for description)
        OS = ScientificCERNSLC_Carbon_6.4

        # (Optional) Boolean attribute that indicates if the site accepts pilots (default: True)
        Pilot = False

        # Type of CE; can take any CE type DIRAC recognizes (:ref:`CE`)
        CEType = HTCondorCE

        # (Optional) Type of 'Inner' CE, normally empty. Default = "InProcess".
        # Possibilities: potentially all CE types, but in practice
        # the most valid would be: InProcess, Sudo, Singularity, Pool.
        # Pool CE in turn uses InProcess (Default)
        # or Sudo or Singularity. To specify, use Pool/ce_type.
        # This option can also go at the Queue level.
        LocalCEType = Pool

        # (Optional) max number of processors that DIRAC pilots are allowed to exploit. Implicit default = 1
        NumberOfProcessors = 12

        # (Optional) Number of available worker nodes per allocation.
        # Values can be a number (e.g. 2 nodes) or a range of values
        # (e.g. from 2 to 4 nodes) which leaves the choice to the batch
        # system.
        NumberOfNodes = 2
        # NumberOfNodes = 2-4

        # (Optional) CE allows *whole node* jobs
        WholeNode = True

        # (Optional) List of tags specific for the CE
        Tag = GPU, 96RAM

        # (Optional) List of required tags that a job to be eligible must have
        RequiredTag = GPU,96RAM

        # Queues available for this VO in the CE
        Queues
        {
          # Name of the queue
          ce503.cern.ch-condor
          {
            # Name of the queue in the corresponding CE if not the same
            # as the name of the queue section
            # (should be avoided)
            CEQueueName = pbs-grid

            # (Optional) VO served by this queue
            VO = lhcb

            # CPU scaling reference, in SI00 (SpecInt2000) units
            SI00 = 3100

            # Maximum number of jobs in all statuses
            MaxTotalJobs = 5000

            # Maximum number of jobs in waiting status
            MaxWaitingJobs = 200

            # Maximum CPU time allowed for jobs running in the queue
            maxCPUTime = 7776

            # The URL where to find the outputs
            OutputURL = gsiftp://localhost

            # Overwrites NumberOfProcessors at the CE level
            NumberOfProcessors = 12

            # Overwrites NumberOfNodes at the CE level
            NumberOfNodes = 12

            # Overwrites WholeNode at the CE level
            WholeNode = True

            # Overwrites LocalCEType at the CE level
            LocalCEType = Pool/Singularity

            # List of tags specific for the Queue
            Tag = MultiProcessor

            # List of required tags that a job to be eligible must have
            RequiredTag = GPU,96RAM
          }
        }
        # VO served by the CE (see also the queue-level option above)
        VO = lhcb
        # (Optional) Maximum memory available (0: not specified)
        MaxRAM = 0
        # HTCondorCE-specific option: use a local schedd for submission
        # (False: direct remote submission)
        UseLocalSchedd = False
        # HTCondorCE-specific option: number of days to keep pilot logs
        DaysToKeepLogs = 1
      }
    }
  }
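
Once defined, these values can be read back programmatically. Below is a minimal sketch assuming a working DIRAC client installation; the site name and paths follow the example above, and the initialization of the DIRAC environment (normally done via Script.parseCommandLine()) is omitted for brevity:

  from DIRAC import gConfig

  site = "LCG.CERN.ch"
  domain = site.split(".")[0]
  base = "/Resources/Sites/%s/%s" % (domain, site)

  # With a list default, comma-separated options are returned as lists
  localSEs = gConfig.getValue(base + "/SE", [])
  print("Local SEs:", localSEs)

  # Walk the CEs and their queues as laid out in the section above
  ceResult = gConfig.getSections(base + "/CEs")
  for ce in ceResult.get("Value", []):
      queueResult = gConfig.getSections("%s/CEs/%s/Queues" % (base, ce))
      for queue in queueResult.get("Value", []):
          maxCPUTime = gConfig.getValue(
              "%s/CEs/%s/Queues/%s/maxCPUTime" % (base, ce, queue), 0
          )
          print("%s / %s: maxCPUTime = %s" % (ce, queue, maxCPUTime))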