Limiter

Encapsulates the logic for limiting the matching of jobs.

Utilities and classes here are used by the Matcher.

class DIRAC.WorkloadManagementSystem.Client.Limiter.Limiter(jobDB=None, opsHelper=None, pilotRef=None)

Bases: object

__init__(jobDB=None, opsHelper=None, pilotRef=None)

Constructor

condCache = <DIRAC.Core.Utilities.DictCache.DictCache object>
csDictCache = <DIRAC.Core.Utilities.DictCache.DictCache object>
delayMem = {}
getNegativeCond()

Get negative condition for ALL sites

getNegativeCondForSite(siteName, gridCE=None)

Generate a negative query based on the limits set on the site

newCache = <DIRAC.WorkloadManagementSystem.Client.Limiter.TwoLevelCache object>
updateDelayCounters(siteName, jid)
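The "negative query" idea above can be illustrated with a small stdlib-only sketch. This is not DIRAC code: the dict shape (attribute name mapped to a list of excluded values) and the sample values are assumptions made for illustration only.

```python
# Hypothetical shape of a negative condition: a mapping from a job
# attribute to the values that must be EXCLUDED when matching jobs
# for a site (e.g. because a per-site limit was reached).
def apply_negative_cond(jobs, negative_cond):
    """Keep only the jobs whose attributes do not hit an excluded value."""
    matched = []
    for job in jobs:
        # Drop the job if any attribute value appears in its exclusion list.
        if any(job.get(attr) in excluded for attr, excluded in negative_cond.items()):
            continue
        matched.append(job)
    return matched


jobs = [
    {"JobID": 1, "JobType": "User"},
    {"JobID": 2, "JobType": "DataReprocessing"},
]
# Hypothetical limit: no more DataReprocessing jobs at this site.
negative_cond = {"JobType": ["DataReprocessing"]}
print(apply_negative_cond(jobs, negative_cond))  # keeps only job 1
```

In the real Matcher the exclusion is applied at the database-query level rather than by filtering in Python, but the effect on the candidate job set is the same.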
class DIRAC.WorkloadManagementSystem.Client.Limiter.TwoLevelCache(soft_ttl: int, hard_ttl: int, *, max_workers: int = 10, max_items: int = 1000000)

Bases: object

A two-level caching system with soft and hard time-to-live (TTL) expiration.

This cache implements a two-tier caching mechanism to allow for background refresh of cached values. It uses a soft TTL for quick access and a hard TTL as a fallback, which helps in reducing latency and maintaining data freshness.

soft_cache

A cache with a shorter TTL for quick access.

Type:

TTLCache

hard_cache

A cache with a longer TTL as a fallback.

Type:

TTLCache

locks

Thread-safe locks for each cache key.

Type:

defaultdict

futures

Stores ongoing asynchronous population tasks.

Type:

dict

pool

Thread pool for executing cache population tasks.

Type:

ThreadPoolExecutor

Parameters:
  • soft_ttl (int) – Time-to-live in seconds for the soft cache.

  • hard_ttl (int) – Time-to-live in seconds for the hard cache.

  • max_workers (int) – Maximum number of workers in the thread pool.

  • max_items (int) – Maximum number of items in the cache.

Example

>>> cache = TwoLevelCache(soft_ttl=60, hard_ttl=300)
>>> def populate_func():
...     return "cached_value"
>>> value = cache.get("key", populate_func)

Note

The cache uses a ThreadPoolExecutor (10 workers by default, configurable via max_workers) to handle concurrent cache population requests.

__init__(soft_ttl: int, hard_ttl: int, *, max_workers: int = 10, max_items: int = 1000000)

Initialize the TwoLevelCache with specified TTLs.

get(key: str, populate_func: Callable[[], Any])

Retrieve a value from the cache, populating it if necessary.

This method first checks the soft cache for the key. If not found, it checks the hard cache while initiating a background refresh. If the key is not in either cache, it waits for the populate_func to complete and stores the result in both caches.

Locks are used to ensure there is never more than one concurrent population task for a given key.

Parameters:
  • key (str) – The cache key to retrieve or populate.

  • populate_func (Callable[[], Any]) – A function to call to populate the cache if the key is not found.

Returns:

The cached value associated with the key.

Return type:

Any

Note

This method is thread-safe and handles concurrent requests for the same key.
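The three-step lookup described above (soft hit, hard hit with background refresh, full miss) can be sketched with the standard library alone. This is an illustrative simplification, not the DIRAC implementation: it stores (expiry, value) pairs in plain dicts, assumes cached values are never None, and omits the max_items bound.

```python
import threading
import time
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable


class MiniTwoLevelCache:
    """Simplified sketch of the soft/hard TTL lookup flow."""

    def __init__(self, soft_ttl: int, hard_ttl: int, max_workers: int = 10):
        self.soft_ttl = soft_ttl
        self.hard_ttl = hard_ttl
        self.soft_cache = {}  # key -> (expiry timestamp, value)
        self.hard_cache = {}  # key -> (expiry timestamp, value)
        self.locks = defaultdict(threading.Lock)
        self.futures = {}  # key -> in-flight population Future
        self.pool = ThreadPoolExecutor(max_workers=max_workers)

    def _fresh(self, cache, key):
        # Return the value only if its TTL has not expired.
        # (Assumes None is never a legitimate cached value.)
        entry = cache.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None

    def get(self, key: str, populate_func: Callable[[], Any]) -> Any:
        # 1. Soft-cache hit: return immediately.
        value = self._fresh(self.soft_cache, key)
        if value is not None:
            return value
        with self.locks[key]:
            # The per-key lock guarantees at most one in-flight
            # population task per key.
            if key not in self.futures:
                self.futures[key] = self.pool.submit(self._populate, key, populate_func)
            # 2. Hard-cache hit: serve the stale value while the
            #    background refresh runs.
            stale = self._fresh(self.hard_cache, key)
            if stale is not None:
                return stale
            future = self.futures[key]
        # 3. Full miss: block until the population task completes.
        return future.result()

    def _populate(self, key, populate_func):
        try:
            value = populate_func()
            now = time.monotonic()
            self.soft_cache[key] = (now + self.soft_ttl, value)
            self.hard_cache[key] = (now + self.hard_ttl, value)
            return value
        finally:
            # Allow a new refresh to be scheduled for this key.
            with self.locks[key]:
                self.futures.pop(key, None)
```

The point of the two tiers is latency hiding: once a key has been populated, callers within the hard TTL never wait on populate_func again, they at worst receive a slightly stale value while a single background task refreshes it.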