
Detect AI-Generated Images

The Copyleaks AI Image Detection API determines whether a given image was fully or partially generated by AI. The API is synchronous, meaning you receive the results in the same API call.

This guide will walk you through the process of submitting an image for AI detection and understanding the results.

  1. Before you start, ensure you have a Copyleaks account and your API key (available on the Copyleaks API Dashboard).

  2. Choose your preferred method for making API calls.

    You can interact with the API using any standard HTTP client.

    For a quicker setup, we provide a Postman collection. See our Postman guide for instructions.

  3. To perform a scan, first generate an access token using the login endpoint. Your API key can be found on the Copyleaks API Dashboard.

    Upon successful authentication, you will receive a token that must be attached to subsequent API calls via the Authorization: Bearer <TOKEN> header. This token remains valid for 48 hours.

    POST https://id.copyleaks.com/v3/account/login/api
    Headers
    Content-Type: application/json
    Body
    {
      "email": "<YOUR_EMAIL>",
      "key": "00000000-0000-0000-0000-000000000000"
    }

    Response

    {
      "access_token": "<ACCESS_TOKEN>",
      ".issued": "2025-07-31T10:19:40.0690015Z",
      ".expires": "2025-08-02T10:19:40.0690016Z"
    }
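
    If you prefer to script this step, here is a minimal Python sketch of the login call using the requests library (the credentials shown are placeholders):

    # Minimal sketch of the login call, assuming the third-party requests library
    # (pip install requests). Replace the placeholders with your own credentials.
    import requests

    LOGIN_URL = "https://id.copyleaks.com/v3/account/login/api"

    def get_access_token(email, api_key):
        """Exchange Copyleaks credentials for a bearer token (valid for 48 hours)."""
        response = requests.post(
            LOGIN_URL,
            json={"email": email, "key": api_key},
            headers={"Content-Type": "application/json"},
        )
        response.raise_for_status()
        return response.json()["access_token"]

    # Example usage:
    # token = get_access_token("<YOUR_EMAIL>", "00000000-0000-0000-0000-000000000000")
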
  4. Use the AI Image Detector Endpoint to send an image for analysis. We suggest providing a unique scanId for each submission; it is passed in the URL path, as in the example below.

    POST https://api.copyleaks.com/v1/ai-image-detector/my-image-scan-1/check
    Headers
    Authorization: Bearer <YOUR_AUTH_TOKEN>
    Content-Type: application/json
    Body
    {
      "base64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ...",
      "filename": "test-image.png",
      "sandbox": true,
      "model": "ai-image-1-ultra"
    }
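
    As a companion to the raw request above, here is a minimal Python sketch that reads an image file, base64-encodes it, and submits it for analysis (it reuses the get_access_token helper from the previous step; file paths and the scan id are illustrative):

    # Minimal sketch of the submission call, assuming the requests library and a
    # token obtained in the previous step. The scan id is part of the URL path.
    import base64
    import os
    import requests

    def submit_image(token, image_path, scan_id, sandbox=True):
        """Send an image to the AI Image Detector and return the parsed JSON response."""
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("ascii")

        url = f"https://api.copyleaks.com/v1/ai-image-detector/{scan_id}/check"
        response = requests.post(
            url,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
            json={
                "base64": encoded,
                "filename": os.path.basename(image_path),
                "sandbox": sandbox,  # set to False for production scans
                "model": "ai-image-1-ultra",
            },
        )
        response.raise_for_status()
        return response.json()

    # Example usage:
    # scan = submit_image(token, "test-image.png", "my-image-scan-1")
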
  5. The API response contains a summary object with the overall percentage of AI vs. human pixels, and a result object with a Run-Length Encoded (RLE) mask.

    Run-Length Encoding (RLE) is a compression method used to represent the AI-detected regions of the image efficiently. It provides an array of start positions and an array of lengths, one pair per run of AI-detected pixels in a flattened 1D version of the image. For example, a start of 10 with a length of 5 marks pixels 10 through 14 as AI-detected.

    You can decode this RLE data to create a binary mask. Here’s a JavaScript helper function to do so:

    function decodeMask(rleData, imageWidth, imageHeight) {
      const totalPixels = imageWidth * imageHeight;
      const mask = new Array(totalPixels).fill(0);
      const starts = rleData.starts || [];
      const lengths = rleData.lengths || [];

      for (let i = 0; i < starts.length; i++) {
        const start = starts[i];
        const length = lengths[i];
        for (let j = 0; j < length; j++) {
          const position = start + j;
          if (position < totalPixels) {
            mask[position] = 1;
          }
        }
      }
      return mask;
    }

    // Example usage:
    // const { result, imageInfo } = await response.json();
    // const binaryMask = decodeMask(result, imageInfo.shape.width, imageInfo.shape.height);

    The resulting binaryMask is a 1D array where a 1 represents an AI-detected pixel. You can use this mask to create a visual overlay on the original image.

    After decoding the RLE data, you can use the resulting mask to draw a semi-transparent overlay on the original image. Here is an example of how to achieve this in Python:

    # Requires: pip install Pillow
    from PIL import Image, ImageDraw
    import io

    def apply_overlay(image_bytes, rle_mask, width, height):
        """Applies a semi-transparent red overlay to an image based on an RLE mask."""
        img = Image.open(io.BytesIO(image_bytes)).convert("RGBA")
        overlay = Image.new('RGBA', img.size, (255, 255, 255, 0))
        draw = ImageDraw.Draw(overlay)

        starts = rle_mask.get('starts', [])
        lengths = rle_mask.get('lengths', [])

        for i in range(len(starts)):
            start_pixel = starts[i]
            run_length = lengths[i]
            for p in range(run_length):
                # Convert the flattened 1D pixel index back to (x, y) coordinates.
                pixel_index = start_pixel + p
                x = pixel_index % width
                y = pixel_index // width
                if x < width and y < height:
                    draw.point((x, y), fill=(255, 0, 0, 128))

        img = Image.alpha_composite(img, overlay)
        byte_arr = io.BytesIO()
        img.save(byte_arr, format='PNG')
        return byte_arr.getvalue()
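
    To tie the pieces together, here is an illustrative usage sketch; it assumes the submit_image helper from step 4 and that the response exposes result and imageInfo as in the decoding example above:

    # Illustrative usage, assuming the helpers sketched above. Field access mirrors
    # the decoding example: result holds the RLE mask, imageInfo.shape the dimensions.
    with open("test-image.png", "rb") as f:
        original_bytes = f.read()

    scan = submit_image(token, "test-image.png", "my-image-scan-1")
    shape = scan["imageInfo"]["shape"]
    overlaid_png = apply_overlay(original_bytes, scan["result"], shape["width"], shape["height"])

    with open("test-image-overlay.png", "wb") as f:
        f.write(overlaid_png)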

    For a complete breakdown of all fields in the response, see the AI Image Detection Response documentation.

  6. You have successfully submitted an image for AI detection. You can now use the JSON response in your application to take further action based on the findings.