Clockwork: Distributed, Scalable Job Scheduler


Clockwork-Scheduler is a general-purpose distributed job scheduler. It offers a horizontally scalable scheduler with at-least-once delivery guarantees. It is fault-tolerant, persistent, and easy to deploy and maintain.


General-purpose scheduling for external services.

Clients can create a schedule for a specified time in the future; at the scheduled time, Clockwork executes the schedule and delivers the taskData to the provided Kafka topic. Right now the only supported delivery mode is a Kafka topic, but we plan to add more, such as webhooks.

Existing Solutions

Architecture Approach and Challenges

We started with a simple architecture of three components: a web API, a database, and execution workers.


	"clientId": "client1",
	"scheduleKey": "order1-create",
	"taskData": "payload",
	"delivery": {
        "topic": "orderTopic"
	"scheduleTime": 1553629195570

The web API receives REST requests to create schedules and stores them in the database. In the background, an execution worker polls the database (scheduleTime < currentTime) to find schedules to execute.
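This naive flow can be sketched as a minimal in-memory stand-in (the table, function names, and second sample schedule are assumptions for illustration; only the schedule fields come from the API above):

```python
# In-memory stand-in for the schedules table (hypothetical; the real
# system stores schedules in a database behind the web API).
schedules = [
    {"scheduleKey": "order1-create", "scheduleTime": 1553629195570, "taskData": "payload"},
    {"scheduleKey": "order2-create", "scheduleTime": 9999999999999, "taskData": "payload"},
]

def poll_due_schedules(now_ms):
    """The execution worker's poll: find schedules with scheduleTime < now."""
    return [s for s in schedules if s["scheduleTime"] < now_ms]

# Only order1-create is due at this timestamp.
due = poll_due_schedules(now_ms=1553629200000)
```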

This approach has a major drawback: it won’t scale beyond one execution worker, since all workers would try to execute the same schedules and there would be no performance gain.

Partition the Database

To solve this, we can partition (shard) the database so that different execution workers query separate partitions and execute schedules in parallel.

We could use partitioned Postgres or MySQL, but managing such a setup (adding new shards, backups, etc.) is challenging, and we don’t need multi-document ACID guarantees. We could instead use a distributed database such as DynamoDB or Cassandra, which supports partitioning by default and whose clusters are easier to maintain. We opted for DynamoDB as it is a fully managed service.

DynamoDB Basics

If you are already familiar with DynamoDB, please skip this section. Each document in DynamoDB must have a partition key (hash key), which determines which partition the document goes to. In addition, a table can optionally define a sort key, which determines the order of records within a partition: all data in one partition is sorted by the provided sort key.

To find any document in DynamoDB we must provide the partition key of the document.
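As a toy model (pure Python, not the AWS SDK), the partition-key/sort-key layout looks roughly like this:

```python
from collections import defaultdict
import bisect

class ToyTable:
    """Toy model of DynamoDB's layout: documents are grouped by
    partition key, and each partition keeps its items ordered by
    sort key, which enables efficient range queries per partition."""

    def __init__(self):
        # partitionKey -> sorted list of (sortKey, doc)
        self.partitions = defaultdict(list)

    def put(self, partition_key, sort_key, doc):
        bisect.insort(self.partitions[partition_key], (sort_key, doc))

    def query_less_than(self, partition_key, upper):
        # Range query within ONE partition: sortKey < upper.
        items = self.partitions[partition_key]
        idx = bisect.bisect_left(items, (upper,))
        return [doc for _, doc in items[:idx]]
```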

To learn more, please check the official DynamoDB documentation.

Polling Query

partitionKey: ?
sortKey: scheduleTime
query: scheduleTime < currentTime
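In code, each execution worker’s poll is scoped to a single partition (a pure-Python sketch; the partition ids and table layout are assumptions):

```python
# partitionKey -> schedules ordered by scheduleTime (the sort key).
table = {
    1: [{"scheduleKey": "a", "scheduleTime": 100},
        {"scheduleKey": "b", "scheduleTime": 900}],
    2: [{"scheduleKey": "c", "scheduleTime": 200}],
}

def poll_partition(partition_id, now_ms):
    # Range query on one partition only: scheduleTime < now_ms.
    # Workers polling different partitions never contend with each other.
    return [s for s in table[partition_id] if s["scheduleTime"] < now_ms]
```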

Choosing Partition Key

Coordinating Executor Workers

Now that we know all the partitionKeys, we still need to figure out how to assign partitions to execution workers, how to detect worker failures, and how to rebalance partitions when workers join or leave.

Solution: Kafka

We are in luck: all the above tasks are performed by Kafka internally to manage its consumers, and since we are already using Kafka for delivery, we can leverage it to perform all of them.

Brief About Kafka

Kafka is a distributed log (event store) that can also be used for pub-sub and can handle huge amounts of traffic.

Kafka has topics, and topics have partitions. Kafka uses consistent hashing to uniformly distribute load across all the partitions of a topic. Kafka uses consumer groups to assign partitions to consumers and to reassign partitions in case of consumer failure.
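A toy illustration of these two mechanisms (simplified: a deterministic CRC32 stands in for Kafka’s murmur2 partitioner, and the assignor is reduced to round-robin):

```python
import zlib

def partition_for(key, num_partitions):
    # Deterministic stand-in for Kafka's murmur2-based key partitioner:
    # the same key always lands on the same partition.
    return zlib.crc32(key.encode()) % num_partitions

def assign(partitions, consumers):
    # Round-robin split of a topic's partitions across the members of a
    # consumer group (Kafka's real assignors are more involved; this is
    # just the idea). Re-running after membership changes rebalances.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Four partitions split evenly across a group of two consumers.
groups = assign([0, 1, 2, 3], ["c1", "c2"])
```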

Leveraging Kafka

We can use Kafka’s partitioning (consistent hashing) to generate partitionIds and its consumer assignment to assign executors.

Every Clockwork node runs two workers: a create worker (a Kafka consumer) and a schedule execution worker.

Both workers stay in sync through Kafka.

For example, if a node’s create worker is consuming from Kafka partition 3, then the execution worker on that node polls the same DynamoDB partition (3).

If a Kafka rebalance moves that create worker to partition 1, then the execution worker likewise switches to polling partition 1 of DynamoDB to execute jobs.

To scale out, we can increase the number of partitions in the Kafka topic and add more Clockwork nodes. Kafka will automatically assign the new partitions to create workers, and the execution workers on those nodes will start polling the new partitions.
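The sync between the two workers can be sketched as a rebalance callback (class and method names are assumptions; a real implementation would hook into the Kafka consumer’s rebalance listener):

```python
class Node:
    """One Clockwork node: its execution worker mirrors whatever Kafka
    partitions its create worker currently owns."""

    def __init__(self):
        self.polling_partitions = set()

    def on_partitions_assigned(self, partitions):
        # Invoked after a Kafka rebalance: the execution worker starts
        # polling exactly the DynamoDB partitions that match the Kafka
        # partitions this node's create worker now consumes.
        self.polling_partitions = set(partitions)

node = Node()
node.on_partitions_assigned([3])  # create worker owns Kafka partition 3
node.on_partitions_assigned([1])  # after a rebalance, partition 1
```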

Find the project on GitHub

Follow the discussion on Hacker News