Cloudflare Worker

DataDome Cloudflare integration detects and protects against bot activity.

This module is designed to run on Cloudflare, using the Workers feature:

- Before the regular Cloudflare process starts, an event is triggered and the DataDome logic runs in a Worker function.
- The module makes a call to the closest DataDome endpoint. Depending on the API response, the module either blocks the request or lets Cloudflare proceed with the regular process.

The module has been developed to protect the visitor's experience: if any error occurs during the process, or if the timeout is reached, the module automatically disables its blocking process and allows the regular Cloudflare process to proceed.
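The fail-open behavior described above can be sketched as a small decision helper. This is a hypothetical illustration, not the module's actual code, and it assumes a 403 status from the DataDome API means "block":

```javascript
// Hypothetical sketch of the block/pass decision, including the
// fail-open behavior: any error or timeout disables blocking.
// Assumption: a 403 status from the DataDome API means "block".
function decideAction(apiStatus, errored) {
  if (errored) {
    return 'pass'; // fail-open: never degrade the visitor's experience
  }
  return apiStatus === 403 ? 'block' : 'pass';
}

console.log(decideAction(403, false)); // "block"
console.log(decideAction(200, false)); // "pass"
console.log(decideAction(0, true));    // "pass" (error or timeout reached)
```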

How to install and configure

  1. Connect to your Cloudflare console and go to the Workers section.
  2. Click on "Manage Workers".
  3. Click on "Create a Worker".
  4. Download our Cloudflare module and paste the code from datadome.js in the Script Editor.
  5. Fill the server-side key variable (DATADOME_LICENSE_KEY) with the server-side key from your dashboard.
  6. Fill the client-side key variable (DATADOME_JS_KEY) with the client-side key from your dashboard.
  7. Go back to the Workers section and add a Route for the domain you want to protect. Depending on your Cloudflare plan, dashboards may differ slightly.
  8. Click on the "Save" button.

Congrats! Your website is now protected, at the Edge, against bot traffic!


| Setting | Description | Required | Default |
| --- | --- | --- | --- |
| Server-side key | Your DataDome server-side key | Yes | "" |
| Client-side key | Your DataDome client-side key | Optional (but recommended) | "" |
| Client-side advanced options | JSON object describing JS Tag options (see here for more documentation) | Optional | "" |
| Timeout | The request timeout for the DataDome API, in milliseconds | Optional | 150 |
| URL inclusion regex | Processes matching URIs only | Optional | "" |
| URL exclusion regex | Ignores all matching URIs | Optional | excludes static assets |
| Static assets URI exclusion regex | Will not send traffic associated with static assets | Optional | "" |
| IPs exclusion for server-side detection | Will not send traffic associated with these IPs to DataDome | Optional | "" |
| Client-side tag URL | URL of the JS Tag. Can be changed to include the tag as a first party | Optional | |
| Client-side endpoint URL | URL of the JS Tag endpoint | Optional | |
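In datadome.js, these settings map to variables at the top of the script. The sketch below shows what such a configuration block may look like; DATADOME_LICENSE_KEY and DATADOME_JS_KEY are the names from the installation steps above, while the other variable names are illustrative and may differ in your module version:

```javascript
// Illustrative configuration block (variable names other than the two keys
// are assumptions and may differ in your version of datadome.js).
var DATADOME_LICENSE_KEY = 'your-server-side-key';   // required
var DATADOME_JS_KEY = 'your-client-side-key';        // optional, recommended
var DATADOME_TIMEOUT = 150;                          // API timeout, in ms
// Hypothetical exclusion regex: skip common static assets
var DATADOME_URI_EXCLUSION = /\.(js|css|png|jpg|svg|woff2?)$/i;
```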

Caching policy

The DataDome module doesn't change the default caching policy.

However, the module adds a tracking cookie on all requests, which may impact some custom policies.

You can use the Worker TTL feature to force a specific caching TTL.
Feel free to contact our support for any specific needs.
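If you do want to force a TTL from the Worker, a fetch override along these lines can do it. This is a sketch using Cloudflare's `cf` request options (`cacheTtl` and `cacheEverything` are Cloudflare-specific and only take effect inside a Worker):

```javascript
// Sketch: forcing a 5-minute edge cache TTL from a Worker.
// `cf.cacheTtl` / `cf.cacheEverything` are Cloudflare-specific fetch options;
// the TTL value here is an example, not a recommendation.
async function fetchWithTtl(request) {
  return fetch(request, {
    cf: { cacheTtl: 300, cacheEverything: true },
  });
}
```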


Can I enable DataDome only for specific IPs?

Yes, you can. You need to update the code at the beginning of the handleRequest function, as shown below:

```javascript
async function handleRequest(request) {
  try {
    // Bypass DataDome for every IP except the one listed below
    const clientIP = request.headers.get('cf-connecting-ip');
    if (clientIP != "" && clientIP != "2606:4700:30::681b:938f") {
      return await fetch(request);
    }

    const url = new URL(request.url);
    // ... the rest of the module's handleRequest logic continues unchanged
```

With this change, DataDome will only process requests coming from the IP 2606:4700:30::681b:938f.
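To allow more than one IP, a small helper keeps the check readable. This helper is hypothetical (not part of the module), and the second address below is a placeholder from the documentation range, not a real client:

```javascript
// Hypothetical helper: restrict DataDome processing to a list of client IPs.
// The second entry is a placeholder (documentation range), not a real address.
const DATADOME_ENABLED_IPS = ['2606:4700:30::681b:938f', '203.0.113.7'];

function shouldProcessWithDataDome(clientIP) {
  // Only requests from the listed IPs go through DataDome detection
  return DATADOME_ENABLED_IPS.includes(clientIP);
}

console.log(shouldProcessWithDataDome('2606:4700:30::681b:938f')); // true
console.log(shouldProcessWithDataDome('198.51.100.1'));            // false
```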

How do I get DataDome logs using Logpush?

You can use Workers Trace Events with Logpush to send logs to a destination supported by Logpush (Datadog, Splunk, S3 Bucket…).


Logpush is available to customers on Cloudflare’s Enterprise plan.


Logging from Workers is available only with version 1.14 (or later) of our module; it is not possible to get logs from our app.

This setup can be done only through Cloudflare's API.

Gather your credentials

You will need:

- X-Auth-Email: the email address of your account
- X-Auth-Key: the value of "Global API Key" from "My Profile" → "API tokens"
- ACCOUNT_ID: the value of the account ID shown on the "Overview" page
- SERVICE_NAME: the name of the EXISTING service in which you set up DataDome

Update the worker’s script

Fill the DATADOME_LOG_VALUES variable with the names of the values you want, as an array of strings.
The possible values are:
X-DataDome-isbot, X-DataDome-botname, X-DataDome-captchapassed, X-DataDome-ruletype, X-DataDome-requestid, X-DataDome-matchedmodels, x-datadomeresponse, x-dd-type, X-DataDome-score and X-DataDome-Traffic-Rule-Response


```javascript
var DATADOME_LOG_VALUES = ["X-DataDome-botname", "X-DataDome-isbot", "x-datadomeresponse"];
```

Create a Logpush trace worker

Trace Workers are used to send logs to Logpush.

The "service" field should be set to the name of your Worker and its environment (the default environment is production).

```shell
curl -X POST \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <AUTH_KEY>" \
"https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/workers/traces" \
-d '{
  "producer": {
    "service": "<SERVICE_NAME>"
  },
  "type": "log_push"
}' | jq .
```

Make sure the traces were created:

```shell
curl -X GET \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <AUTH_KEY>" \
"https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/workers/traces" | jq .
```


This step has to be done every time the service is deployed.

Create a job to send data to your destination


Workers trace events is an ACCOUNT-scoped dataset.
Make sure that all the API requests you make target client/v4/accounts and not client/v4/zones.

To set up your destination properly, you can follow the documentation here.
Do not forget that all actions have to be made to client/v4/accounts and not to client/v4/zones.

```shell
curl -s "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs" \
-X POST -d '{
  "name": "datadome-logs",
  "logpull_options": "fields=Event,EventTimestampMs,Exceptions,Logs,ScriptName",
  "destination_conf": "<YOUR_DESTINATION>",
  "max_upload_bytes": 5000000,
  "max_upload_records": 1000,
  "dataset": "workers_trace_events",
  "enabled": true
}' \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>"
```

Receive data

You will receive the logs at the destination you chose.
The information sent by DataDome is a Log Message composed of the values set in DATADOME_LOG_VALUES, in the same order, separated by a semicolon. A "-" is set when a value is undefined.
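Given that format, a consumer-side parser is straightforward. This is a sketch, assuming DATADOME_LOG_VALUES was set as in the example earlier in this section:

```javascript
// Sketch: parse a DataDome log message back into named fields.
// Assumes DATADOME_LOG_VALUES matches the earlier example; "-" means undefined.
const DATADOME_LOG_VALUES = ['X-DataDome-botname', 'X-DataDome-isbot', 'x-datadomeresponse'];

function parseLogMessage(message) {
  const parts = message.split(';');
  const result = {};
  DATADOME_LOG_VALUES.forEach((name, i) => {
    result[name] = parts[i] === '-' ? undefined : parts[i];
  });
  return result;
}

console.log(parseLogMessage('curl;1;403'));
// { 'X-DataDome-botname': 'curl', 'X-DataDome-isbot': '1', 'x-datadomeresponse': '403' }
```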