API Reference

Async Jobs

Generate PDFs in the background and upload directly to your storage.

When to use async mode

Processing many PDFs without blocking
Your client has short request timeouts
Storing PDFs directly in your own S3/R2/GCS bucket
Batch document generation workflows

How It Works

1. Send request with async: true
   Include upload_url (required) and optionally webhook_url.

2. Get job ID immediately
   The API returns a job_id and status_url without waiting for PDF generation.

3. PDF uploaded to your storage
   When ready, the PDF is PUT to your presigned URL and a webhook is sent (if configured).

Creating an Async Request

curl -X POST https://api.tailpdf.com/pdf \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "content": "<div class=\"p-8\"><h1>Invoice #1234</h1></div>",
    "fonts": ["Inter:wght@400;600"],
    "async": true,
    "upload_url": "https://my-bucket.s3.amazonaws.com/pdfs/invoice-1234.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&...",
    "webhook_url": "https://api.example.com/webhooks/pdf-ready"
  }'

Response (202 Accepted)

{
  "job_id": "01JFXYZ123456789ABCDEFGH",
  "status": "pending",
  "status_url": "/jobs/01JFXYZ123456789ABCDEFGH"
}

Polling Job Status

GET /jobs/{job_id}

Job States

pending Job queued, waiting to start
processing PDF is being rendered
uploading PDF rendered, uploading to your URL
completed Successfully uploaded to your URL
failed An error occurred
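A simple loop can poll the status URL until the job reaches a terminal state. A sketch assuming Node 18+ fetch; the interval and attempt limit are illustrative, not API requirements:

```javascript
// A job is finished once it reaches a terminal state.
function isTerminal(status) {
  return status === 'completed' || status === 'failed';
}

// Poll the status URL until the job finishes or maxAttempts is reached.
async function pollJob(statusUrl, apiKey, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let i = 0; i < maxAttempts; i++) {
    const res = await fetch(statusUrl, { headers: { 'X-API-Key': apiKey } });
    const job = await res.json();
    if (isTerminal(job.status)) return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for job to finish');
}
```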

Completed Response

{
  "job_id": "01JFXYZ123456789ABCDEFGH",
  "status": "completed",
  "created_at": "2024-12-15T10:30:00Z",
  "started_at": "2024-12-15T10:30:01Z",
  "completed_at": "2024-12-15T10:30:02Z",
  "result": {
    "file_size_bytes": 45678,
    "render_duration_ms": 850,
    "upload_duration_ms": 120
  }
}

Failed Response

{
  "job_id": "01JFXYZ123456789ABCDEFGH",
  "status": "failed",
  "created_at": "2024-12-15T10:30:00Z",
  "started_at": "2024-12-15T10:30:01Z",
  "completed_at": "2024-12-15T10:30:05Z",
  "error_message": "Upload failed: 403 Forbidden"
}

Job status is cached for 1 hour after completion or failure.

Webhooks

When you provide a webhook_url, TailPDF sends a POST request when the job completes or fails.

Webhook Headers

Header                 Value
Content-Type           application/json
User-Agent             TailPDF-Webhook/1.0
X-TailPDF-Event        render.completed or render.failed
X-TailPDF-Signature    sha256=... (if a webhook secret is configured)

Completed Webhook Payload

{
  "event": "render.completed",
  "job_id": "01JFXYZ123456789ABCDEFGH",
  "timestamp": "2024-12-15T10:30:02Z",
  "data": {
    "status": "completed",
    "file_size_bytes": 45678,
    "render_duration_ms": 850,
    "upload_duration_ms": 120
  }
}

Failed Webhook Payload

{
  "event": "render.failed",
  "job_id": "01JFXYZ123456789ABCDEFGH",
  "timestamp": "2024-12-15T10:30:05Z",
  "data": {
    "status": "failed",
    "error_message": "Upload failed: 403 Forbidden"
  }
}

Verifying Webhook Signatures

If you have a webhook secret configured in your dashboard, verify signatures to ensure webhooks are from TailPDF.

const crypto = require('crypto');

function verifyWebhook(payload, signature, secret) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', secret)
    .update(payload)
    .digest('hex');

  const a = Buffer.from(signature || '');
  const b = Buffer.from(expected);

  // timingSafeEqual throws if the buffers differ in length, so check first.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// In your webhook handler:
const isValid = verifyWebhook(
  req.rawBody,
  req.headers['x-tailpdf-signature'],
  process.env.TAILPDF_WEBHOOK_SECRET
);

Creating Presigned URLs

Generate a presigned URL from your storage provider to allow TailPDF to upload the PDF directly.

AWS S3

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const client = new S3Client({ region: 'us-east-1' });

const url = await getSignedUrl(client, new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'pdfs/invoice-1234.pdf',
  ContentType: 'application/pdf',
}), { expiresIn: 3600 }); // 1 hour

Cloudflare R2

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const client = new S3Client({
  region: 'auto',
  endpoint: 'https://<account-id>.r2.cloudflarestorage.com',
  credentials: {
    accessKeyId: R2_ACCESS_KEY,
    secretAccessKey: R2_SECRET_KEY,
  },
});

const url = await getSignedUrl(client, new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'pdfs/invoice-1234.pdf',
  ContentType: 'application/pdf',
}), { expiresIn: 3600 });

Google Cloud Storage

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const [url] = await storage
  .bucket('my-bucket')
  .file('pdfs/invoice-1234.pdf')
  .getSignedUrl({
    version: 'v4',
    action: 'write',
    expires: Date.now() + 60 * 60 * 1000, // 1 hour
    contentType: 'application/pdf',
  });

Best Practices

Set URL expiration wisely

Use a 1-hour expiration for presigned URLs. Jobs typically complete within minutes, but the extra time leaves a buffer for retries.

Use webhooks over polling

Webhooks are more efficient than polling. Only poll if webhooks aren't feasible.

Handle upload failures

Check for 403 errors indicating expired or invalid presigned URLs.

Store job IDs

Save job IDs in your database to track and retry failed jobs.
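A minimal shape for that tracking, assuming an in-memory map stands in for your database; the field names and retry policy are illustrative:

```javascript
// In-memory stand-in for a jobs table; in production this would be
// a database keyed by job_id.
const jobs = new Map();

// Record a newly created job alongside the storage key it will upload to.
function recordJob(jobId, uploadKey) {
  jobs.set(jobId, { jobId, uploadKey, status: 'pending', attempts: 1 });
}

// Apply a status update (from a webhook or poll) and flag whether the
// job should be resubmitted. maxAttempts is a local policy, not a TailPDF limit.
function updateJob(jobId, status, { maxAttempts = 3 } = {}) {
  const job = jobs.get(jobId);
  if (!job) return null;
  job.status = status;
  job.shouldRetry = status === 'failed' && job.attempts < maxAttempts;
  return job;
}
```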