When a method's evaluation consumes substantial resources or time, it may be advisable to prevent multiple threads, processes, or machines from evaluating the same method with identical parameters concurrently. This can be achieved by instructing Metalama to employ a lock manager, abstracted by the ILockingStrategy interface.
Metalama provides two locking strategies: NullLockingStrategy, which is the default, and LocalLockingStrategy.
Preventing concurrent execution in the current process
By default, the caching aspect permits concurrent execution of the same method with identical arguments.
The LocalLockingStrategy class implements a locking strategy that prevents concurrent execution of the same method with the same arguments within the current process (or, to be exact, the current AppDomain).
To configure the lock manager, set the LockingStrategy property of the relevant CachingProfile. Each caching profile must be configured separately.
To start using LocalLockingStrategy, go to the code that initializes Metalama Caching by calling serviceCollection.AddMetalamaCaching or CachingService.Create, and supply a delegate that calls AddProfile and sets the LockingStrategy property.
Note
Each instance of the LocalLockingStrategy class maintains its own set of locks. However, it does not matter whether several profiles share one LocalLockingStrategy instance or use different ones, because each cached method is associated with exactly one profile.
For instance, the following snippet activates LocalLockingStrategy for the Locking caching profile:
// Add the caching service.
builder.Services.AddMetalamaCaching(
    caching =>
        caching.AddProfile(
            new CachingProfile( "Locking" )
            {
                LockingStrategy = new LocalLockingStrategy()
            } ) );
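If your application uses several caching profiles and each of them needs local locking, you can assign a LocalLockingStrategy to each profile; as noted above, sharing or not sharing the instance makes no practical difference. A minimal sketch, assuming a second profile whose name "Backend" exists only for illustration:

// Sketch only: the "Backend" profile name is illustrative and not part of the
// original example. Each profile gets its own LocalLockingStrategy instance here,
// although sharing a single instance would behave identically.
builder.Services.AddMetalamaCaching(
    caching =>
    {
        caching.AddProfile(
            new CachingProfile( "Locking" ) { LockingStrategy = new LocalLockingStrategy() } );

        caching.AddProfile(
            new CachingProfile( "Backend" ) { LockingStrategy = new LocalLockingStrategy() } );
    } );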
Example: locking vs non-locking caching access
The following example demonstrates two versions of a simulated ReadFile method: one without cache locking and the other with cache locking. The fake implementations ensure deterministic behavior.
The main program executes these methods twice in parallel and compares their results. When locking is enabled, both executions return exactly the same instance, indicating that the methods did not execute in parallel. This is precisely the purpose of cache locking.
Here is the source code of the CloudService class:

using Metalama.Patterns.Caching.Aspects;
using System;
using System.Threading;

namespace Doc.Locking;

public sealed class CloudService : IDisposable
{
    // We use barriers to make sure we wait long enough.
    private readonly Barrier _withoutLockBarrier = new( 2 );

    [Cache( ProfileName = "Locking" )]
    public byte[] ReadFileWithLock( string path )
    {
        Console.WriteLine( "Doing some very hard work." );

        Thread.Sleep( 50 );

        return new byte[32];
    }

    [Cache]
    public byte[] ReadFileWithoutLock( string path )
    {
        Console.WriteLine( "Doing some very hard work." );

        // Simulate a long-running operation.
        this._withoutLockBarrier.SignalAndWait();

        return new byte[32];
    }

    public void Dispose() => this._withoutLockBarrier.Dispose();
}
Here is the same class after the Cache aspect has been applied (transformed code):

using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Aspects;
using Metalama.Patterns.Caching.Aspects.Helpers;
using System;
using System.Reflection;
using System.Threading;

namespace Doc.Locking;

public sealed class CloudService : IDisposable
{
    // We use barriers to make sure we wait long enough.
    private readonly Barrier _withoutLockBarrier = new(2);

    [Cache(ProfileName = "Locking")]
    public byte[] ReadFileWithLock(string path)
    {
        static object? Invoke(object? instance, object?[] args)
        {
            return ((CloudService)instance).ReadFileWithLock_Source((string)args[0]);
        }

        return _cachingService.GetFromCacheOrExecute<byte[]>(_cacheRegistration_ReadFileWithLock, this, new object[] { path }, Invoke);
    }

    private byte[] ReadFileWithLock_Source(string path)
    {
        Console.WriteLine("Doing some very hard work.");

        Thread.Sleep(50);

        return new byte[32];
    }

    [Cache]
    public byte[] ReadFileWithoutLock(string path)
    {
        static object? Invoke(object? instance, object?[] args)
        {
            return ((CloudService)instance).ReadFileWithoutLock_Source((string)args[0]);
        }

        return _cachingService.GetFromCacheOrExecute<byte[]>(_cacheRegistration_ReadFileWithoutLock, this, new object[] { path }, Invoke);
    }

    private byte[] ReadFileWithoutLock_Source(string path)
    {
        Console.WriteLine("Doing some very hard work.");

        // Simulate a long-running operation.
        this._withoutLockBarrier.SignalAndWait();

        return new byte[32];
    }

    public void Dispose() => this._withoutLockBarrier.Dispose();

    private static readonly CachedMethodMetadata _cacheRegistration_ReadFileWithLock;
    private static readonly CachedMethodMetadata _cacheRegistration_ReadFileWithoutLock;
    private ICachingService _cachingService;

    static CloudService()
    {
        _cacheRegistration_ReadFileWithLock = CachedMethodMetadata.Register(typeof(CloudService).GetMethod("ReadFileWithLock", BindingFlags.Public | BindingFlags.Instance, null, new[] { typeof(string) }, null).ThrowIfMissing("CloudService.ReadFileWithLock(string)"), new CachedMethodConfiguration() { AbsoluteExpiration = null, AutoReload = null, IgnoreThisParameter = null, Priority = null, ProfileName = "Locking", SlidingExpiration = null }, true);
        _cacheRegistration_ReadFileWithoutLock = CachedMethodMetadata.Register(typeof(CloudService).GetMethod("ReadFileWithoutLock", BindingFlags.Public | BindingFlags.Instance, null, new[] { typeof(string) }, null).ThrowIfMissing("CloudService.ReadFileWithoutLock(string)"), new CachedMethodConfiguration() { AbsoluteExpiration = null, AutoReload = null, IgnoreThisParameter = null, Priority = null, ProfileName = (string?)null, SlidingExpiration = null }, true);
    }

    public CloudService(ICachingService? cachingService = null)
    {
        this._cachingService = cachingService ?? throw new System.ArgumentNullException(nameof(cachingService));
    }
}
The ConsoleMain class runs both methods in parallel and compares the results:

using System;
using Metalama.Documentation.Helpers.ConsoleApp;
using System.Threading.Tasks;

namespace Doc.Locking;

public sealed class ConsoleMain : IConsoleMain
{
    private readonly CloudService _cloudService;

    public ConsoleMain( CloudService cloudService )
    {
        this._cloudService = cloudService;
    }

    public void Execute()
    {
        void ExecuteParallel( Func<byte[]> func )
        {
            var task1 = Task.Run( func );
            var task2 = Task.Run( func );

            Task.WaitAll( task1, task2 );

            Console.WriteLine(
                $"Returned same array: {ReferenceEquals( task1.Result, task2.Result )}" );
        }

        Console.WriteLine( "Without lock" );
        ExecuteParallel( () => this._cloudService.ReadFileWithoutLock( "TheFile.txt" ) );

        Console.WriteLine( "With locks" );
        ExecuteParallel( () => this._cloudService.ReadFileWithLock( "TheFile.txt" ) );
    }
}
The program produces the following output:

Without lock
Doing some very hard work.
Doing some very hard work.
Returned same array: False
With locks
Doing some very hard work.
Returned same array: True
Finally, here is the startup code that configures the caching service:

using Metalama.Documentation.Helpers.ConsoleApp;
using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Building;
using Metalama.Patterns.Caching.Locking;
using Microsoft.Extensions.DependencyInjection;

namespace Doc.Locking;

internal static class Program
{
    public static void Main()
    {
        var builder = ConsoleApp.CreateBuilder();

        // Add the caching service.
        builder.Services.AddMetalamaCaching(
            caching =>
                caching.AddProfile(
                    new CachingProfile( "Locking" )
                    {
                        LockingStrategy = new LocalLockingStrategy()
                    } ) );

        // Add other components as usual, then run the application.
        builder.Services.AddConsoleMain<ConsoleMain>();
        builder.Services.AddSingleton<CloudService>();

        using var app = builder.Build();
        app.Run();
    }
}
Handling lock timeouts
When a locking strategy other than the default NullLockingStrategy is used, the caching aspect waits indefinitely for a lock. Suppose the thread evaluating the method becomes stuck (e.g., it is involved in a deadlock). Due to the locking mechanism, all threads evaluating the same method will also become stuck. To avoid this situation, a timeout behavior can be configured.
Note
This section only covers the time taken to acquire a lock. It does not address the execution time of the method that has already acquired the lock.
Two properties of the CachingProfile class influence the timeout behavior:
AcquireLockTimeout determines the maximum time that the caching aspect will wait for the lock manager to acquire a lock. To specify an infinite waiting time, set this property to TimeSpan.FromMilliseconds(-1). The default behavior is to wait indefinitely.
OnLockTimeout is a delegate invoked when the caching aspect cannot acquire a lock due to a timeout. The default behavior is to throw a TimeoutException. To ignore the lock and proceed with the method implementation, replace this property with a delegate that does nothing.
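The following sketch shows how these two properties could be set alongside the locking strategy. The 10-second timeout is arbitrary, and the parameterless lambda assigned to OnLockTimeout is an assumption about the delegate type; check the CachingProfile API reference for the exact signature.

// Sketch only: the timeout value is arbitrary, and the shape of the
// OnLockTimeout delegate is assumed; consult the CachingProfile API
// reference for the exact signature.
builder.Services.AddMetalamaCaching(
    caching =>
        caching.AddProfile(
            new CachingProfile( "Locking" )
            {
                LockingStrategy = new LocalLockingStrategy(),

                // Wait at most 10 seconds for the lock.
                AcquireLockTimeout = TimeSpan.FromSeconds( 10 ),

                // On timeout, do nothing: skip the lock and execute the method
                // anyway instead of throwing a TimeoutException (the default).
                OnLockTimeout = () => { }
            } ) );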
Implementing a distributed lock manager
Implementing a distributed locking algorithm is a complex task, and Metalama deliberately does not provide one (just as it does not provide a cache implementation itself). However, Metalama lets you plug in any third-party implementation.
To make your lock manager work with the caching aspect, you should implement the ILockingStrategy and ILockHandle interfaces.
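Once implemented, a custom strategy is assigned to a profile like any other. In the hypothetical sketch below, RedisLockingStrategy is your own class implementing ILockingStrategy (for example, built on top of a Redis-based distributed lock); it is not provided by Metalama, and its constructor argument is a placeholder.

// Hypothetical: RedisLockingStrategy is a user-defined implementation of
// ILockingStrategy and is not part of Metalama; its constructor argument
// is a placeholder.
builder.Services.AddMetalamaCaching(
    caching =>
        caching.AddProfile(
            new CachingProfile( "Distributed" )
            {
                LockingStrategy = new RedisLockingStrategy( "your-connection-string" )
            } ) );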