Parallel task-based programming models such as OpenMP support the declaration of task data dependences. The runtime uses this information to delay a task's execution until the data it depends on is available. Dependences between tasks are computed at runtime using shared graphs that are updated concurrently by all threads. However, to ensure correctness, only one thread can modify the task graph at a time; the others must wait before applying their modifications. This waiting limits the application's parallelism and becomes critical in many-core systems. This paper characterizes this behavior, analyzing how it hinders performance, and presents an alternative organization suitable for the runtimes of task-based programming models. This organization allows the runtime structures to be managed either asynchronously or synchronously, adapting the runtime to reduce wasted computation resources and increase performance. Results show that the new runtime structure outperforms the peak speedup of the original runtime model when contention is high, and achieves similar or better performance for real applications.