
Detect AI-Generated Images

The Copyleaks AI Image Detection API is a powerful tool to determine if a given image was generated or partially generated by an AI. The API is synchronous, meaning you get the results in the same API call.

This guide will walk you through the process of submitting an image for AI detection and understanding the results.

  1. Before you start, ensure you have a Copyleaks account and your API key, which can be found on the Copyleaks API Dashboard.

  2. Choose your preferred method for making API calls.

    You can interact with the API using any standard HTTP client.

    For a quicker setup, we provide a Postman collection. See our Postman guide for instructions.

  3. To perform a scan, we first need to generate an access token. For that, we will use the login endpoint. The API key can be found on the Copyleaks API Dashboard.

    Upon successful authentication, you will receive a token that must be attached to subsequent API calls via the Authorization: Bearer <TOKEN> header. This token remains valid for 48 hours.

    POST https://id.copyleaks.com/v3/account/login/api

    Headers
    Content-Type: application/json

    Body
    {
      "email": "[email protected]",
      "key": "00000000-0000-0000-0000-000000000000"
    }

    Response
    {
      "access_token": "<ACCESS_TOKEN>",
      ".issued": "2025-07-31T10:19:40.0690015Z",
      ".expires": "2025-08-02T10:19:40.0690016Z"
    }
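    As a sketch, the login call can also be made from Python (this assumes the third-party requests package is installed; the email and key shown are the placeholder values from the example above):

```python
# Sketch: obtaining an access token with the requests library.
# The email and API key below are the placeholders from the example request.
import requests


def login(email: str, key: str) -> str:
    """Return a bearer token, valid for 48 hours per the documentation."""
    resp = requests.post(
        "https://id.copyleaks.com/v3/account/login/api",
        json={"email": email, "key": key},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


# token = login("[email protected]", "00000000-0000-0000-0000-000000000000")
# headers = {"Authorization": f"Bearer {token}"}
```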
  4. Use the AI Image Detector Endpoint to send an image for analysis. We recommend providing a unique scanId for each submission.

    • Minimum 512×512px, maximum 16 megapixels, less than 32MB
    • Supported formats: PNG, JPG, JPEG, BMP, WebP, HEIC/HEIF
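    The constraints above can be checked locally before uploading. Below is an illustrative sketch using Pillow; it interprets "16 megapixels" as 16 million pixels, which is an assumption, and the function name validate_image is hypothetical:

```python
# Sketch: pre-validating an image against the documented limits.
# Requires Pillow (pip install Pillow).
import os
from PIL import Image

ALLOWED_FORMATS = {"PNG", "JPEG", "BMP", "WEBP", "HEIC", "HEIF"}
MAX_BYTES = 32 * 1024 * 1024    # must be less than 32MB
MAX_PIXELS = 16_000_000         # 16 megapixels (assumed to mean 16 million pixels)
MIN_SIDE = 512                  # minimum 512x512 px


def validate_image(path: str) -> None:
    """Raise ValueError if the file violates any documented constraint."""
    if os.path.getsize(path) >= MAX_BYTES:
        raise ValueError("file must be smaller than 32MB")
    with Image.open(path) as img:
        if img.format not in ALLOWED_FORMATS:
            raise ValueError(f"unsupported format: {img.format}")
        width, height = img.size
        if width < MIN_SIDE or height < MIN_SIDE:
            raise ValueError("image must be at least 512x512 pixels")
        if width * height > MAX_PIXELS:
            raise ValueError("image must not exceed 16 megapixels")
```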
    POST https://api.copyleaks.com/v1/ai-image-detector/my-image-scan-1/check

    Headers
    Authorization: Bearer <YOUR_AUTH_TOKEN>
    Content-Type: application/json

    Body
    {
      "base64": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ...",
      "filename": "test-image.png",
      "sandbox": true,
      "model": "ai-image-1-ultra"
    }
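    As a sketch, the full submission can be made from Python with the requests library (assumed installed). The check_image helper below is illustrative and mirrors the request shown above, with sandbox mode kept on for testing:

```python
# Sketch: base64-encoding a local image and submitting it for detection.
# The helper name check_image is illustrative, not part of the API.
import base64
import requests


def check_image(token: str, scan_id: str, image_path: str) -> dict:
    """Submit an image to the AI Image Detector and return the parsed JSON."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(
        f"https://api.copyleaks.com/v1/ai-image-detector/{scan_id}/check",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "base64": encoded,
            "filename": image_path.rsplit("/", 1)[-1],
            "sandbox": True,                 # avoid consuming credits while testing
            "model": "ai-image-1-ultra",
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()


# result = check_image(token, "my-image-scan-1", "test-image.png")
```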
  5. The API response contains:

    • A summary object with the overall percentage of AI vs. human pixels
    • A result object with a Run-Length Encoded (RLE) mask
    • imageInfo with the image dimensions and metadata (when available)
    • scannedDocument with scan details including credits used

    Run-Length Encoding (RLE) is a compression method used to represent the AI-detected regions of the image efficiently. It provides arrays of start positions and lengths for each run of AI-detected pixels in a flattened 1D version of the image.

    You can decode this RLE data to create a binary mask. Here is an example implementation in Python:

    def decode_mask(rle_data, image_width, image_height):
        """
        Decode RLE mask data into a binary mask array.

        Args:
            rle_data (dict): Dictionary with 'starts' and 'lengths' arrays
            image_width (int): Width of the image in pixels
            image_height (int): Height of the image in pixels

        Returns:
            list: A 1D array where 1 represents AI-detected pixels
        """
        total_pixels = image_width * image_height
        mask = [0] * total_pixels
        starts = rle_data.get('starts', [])
        lengths = rle_data.get('lengths', [])
        for start, length in zip(starts, lengths):
            for j in range(length):
                position = start + j
                if position < total_pixels:
                    mask[position] = 1
        return mask

    # Example usage:
    # result = response.json()
    # binary_mask = decode_mask(
    #     result['result'],
    #     result['imageInfo']['shape']['width'],
    #     result['imageInfo']['shape']['height']
    # )
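    For large masks, the same decoding can be vectorized with NumPy by filling each run with a slice assignment. This is an optional sketch; the function name decode_mask_np is illustrative:

```python
# Sketch: vectorized RLE decoding with NumPy slice assignment.
import numpy as np


def decode_mask_np(rle_data: dict, width: int, height: int) -> np.ndarray:
    """Decode RLE data into a (height, width) uint8 mask; 1 = AI-detected."""
    mask = np.zeros(width * height, dtype=np.uint8)
    starts = rle_data.get("starts", [])
    lengths = rle_data.get("lengths", [])
    for start, length in zip(starts, lengths):
        # Slicing clips automatically at the end of the array.
        mask[start:start + length] = 1
    return mask.reshape((height, width))
```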

    The resulting binary mask is an array where a 1 represents an AI-detected pixel. After decoding the RLE data, you can use this mask to draw a semi-transparent overlay on the original image. Here is an example:

    # Requires: pip install Pillow
    from PIL import Image
    import numpy as np

    def apply_overlay(image_path, mask_array, output_path):
        """
        Apply a red (1) and green (0) overlay to the image and save the result.

        Args:
            image_path (str): Path to the original image
            mask_array (np.ndarray): 2D numpy array with 1 (red) and 0 (green)
            output_path (str): Path to save the output image
        """
        height, width = mask_array.shape
        original_img = Image.open(image_path).convert('RGBA')
        overlay = Image.new('RGBA', (width, height), (0, 0, 0, 0))
        overlay_pixels = overlay.load()
        for y in range(height):
            for x in range(width):
                if mask_array[y, x] == 1:
                    overlay_pixels[x, y] = (255, 0, 0, 120)  # Red, semi-transparent
                else:
                    overlay_pixels[x, y] = (0, 255, 0, 120)  # Green, semi-transparent
        result_img = Image.alpha_composite(original_img, overlay)
        result_img.save(output_path)

    # Usage example (result and binary_mask come from the previous steps):
    width = result['imageInfo']['shape']['width']
    height = result['imageInfo']['shape']['height']
    mask_array = np.array(binary_mask, dtype=np.uint8).reshape((height, width))
    apply_overlay('test-image.png', mask_array, 'output-with-overlay.png')

    For a complete breakdown of all fields in the response, see the AI Image Detection Response documentation.

  6. You have successfully submitted an image for AI detection. You can now use the JSON response in your application to take further action based on the findings.