
Implement Lambda Python SDK payload compression for large input props #5384


Merged

JonnyBurger merged 6 commits into main from copilot/fix-5383 on Jun 16, 2025

Conversation

@Copilot Copilot AI (Contributor) commented Jun 16, 2025

This PR implements S3 payload compression in the Lambda Python SDK to handle large input props, bringing feature parity with the JavaScript SDK.

Problem

Previously, the Python SDK raised an error when input props exceeded the AWS Lambda payload limits:

  • Video/audio renders: 200KB limit
  • Still renders: 5MB limit

The JavaScript SDK already had logic to automatically upload large payloads to S3 and pass them as {type: 'bucket-url', hash: string, bucketName: string} instead of {type: 'payload', payload: string}.
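
For reference, the two shapes look roughly like this (field names taken from the JS SDK behavior described above; the exact serialization details are an assumption):

# Small props: sent inline with the invocation
{"type": "payload", "payload": "<serialized input props>"}

# Large props: uploaded to S3, referenced by hash
{"type": "bucket-url", "hash": "<sha256 of serialized props>", "bucketName": "<bucket>"}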

Solution

Implemented the same compression logic in the Python SDK:

Key Changes

  1. Added S3 compression functionality in remotionclient.py (sketched below, after this list):

    • Automatic payload size detection with proper margins (194KB for video, 4.9MB for stills)
    • S3 upload for large payloads with SHA256 hash generation
    • Bucket management (auto-creation or use existing buckets)
    • Returns bucket-url format for large payloads, payload format for small ones
  2. Enhanced RemotionClient constructor with new optional parameters:

    • bucket_name: Specify custom S3 bucket for large payloads
    • force_path_style: Enable path-style S3 URLs
  3. Added comprehensive tests to verify:

    • Small payloads use payload format
    • Large payloads use bucket-url format
    • Hash generation consistency
    • Size limit logic
  4. Updated test client with large payload example

  5. Updated documentation to reflect the new capability available from v4.0.315
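
The size-check-then-upload flow from item 1 can be sketched as follows. This is a minimal sketch: the margin values come from this PR, but serialize_input_props, upload_to_s3, and the S3 key layout are hypothetical names, not the SDK's actual internals.

import hashlib
import json

# Margins below the hard Lambda limits, per this PR:
# 194KB for video/audio renders (200KB limit), 4.9MB for stills (5MB limit).
MARGIN_VIDEO_OR_AUDIO = 194_000
MARGIN_STILL = 4_900_000

def serialize_input_props(input_props, render_type, upload_to_s3):
    """Return the payload dict sent to Lambda.

    upload_to_s3(key, body) -> bucket_name is a hypothetical callback
    standing in for the SDK's internal S3 upload and bucket management.
    """
    payload = json.dumps(input_props)
    limit = MARGIN_STILL if render_type == "still" else MARGIN_VIDEO_OR_AUDIO
    if len(payload.encode("utf-8")) <= limit:
        # Small props travel inline with the Lambda invocation.
        return {"type": "payload", "payload": payload}
    # Large props: content-address the serialized JSON by its SHA256 hash,
    # upload it to S3, and pass only the reference to Lambda.
    props_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    bucket_name = upload_to_s3(f"input-props/{props_hash}.json", payload)
    return {"type": "bucket-url", "hash": props_hash, "bucketName": bucket_name}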

Example Usage

from remotion_lambda import RemotionClient, RenderMediaParams

# Client automatically handles large payloads
client = RemotionClient(
    region="us-east-1",
    serve_url="https://example.com",
    function_name="remotion-render",
    bucket_name="my-bucket"  # Optional
)

# Large props are automatically compressed
large_props = {'bigData': ['x' * 1000] * 250}  # ~250KB
render_params = RenderMediaParams(
    composition="my-composition",
    input_props=large_props,  # Automatically uploaded to S3
)

response = client.render_media_on_lambda(render_params)
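
The force_path_style flag from item 2 can be passed the same way (the comment below is an assumption about its typical use; the flag itself is from the change list above):

client = RemotionClient(
    region="us-east-1",
    serve_url="https://example.com",
    function_name="remotion-render",
    force_path_style=True,  # path-style S3 URLs, e.g. for S3-compatible endpoints
)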

Backward Compatibility

  • All existing functionality remains unchanged
  • Small payloads continue to use the direct payload format
  • No breaking changes to the API

Testing

  • All existing tests pass
  • Added comprehensive unit tests for compression logic
  • Verified both small and large payload scenarios work correctly

Fixes #5383.




@Copilot Copilot AI changed the title [WIP] Solve Lamba Python SDK payload limit issue Implement Lambda Python SDK payload compression for large input props Jun 16, 2025
@Copilot Copilot AI requested a review from JonnyBurger June 16, 2025 08:32
Copilot finished work on behalf of JonnyBurger June 16, 2025 08:32
@JonnyBurger (Member)

@copilot Instead of accepting a bucket name, get the first bucket that matches or create one if necessary.

Use the same logic as in the JS SDK:

export const internalGetOrCreateBucket = async <Provider extends CloudProvider>(

Copilot AI (Contributor, Author) commented Jun 16, 2025

@copilot Instead of accepting a bucket name, get the first bucket that matches or create one if necessary.

Use the same logic as in the JS SDK (https://github.com/remotion-dev/remotion/blob/cbdd707...

Updated the Python SDK to use the same auto-discovery bucket logic as the JS SDK. Removed the bucket_name parameter and implemented automatic finding/creation of buckets with the remotionlambda- prefix, matching the behavior in get-or-create-bucket.ts. The implementation now follows the exact same pattern as the JS SDK including error messages and bucket naming conventions. (97e77c9)
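
The auto-discovery behavior described above can be approximated with boto3. This is a sketch under the assumptions stated in the comment (the remotionlambda- prefix and find-or-create semantics); the real SDK's bucket naming scheme, region filtering, and error handling may differ.

import random
import string

import boto3

BUCKET_PREFIX = "remotionlambda-"  # prefix shared with the JS SDK

def get_or_create_bucket(region: str) -> str:
    """Return the first remotionlambda- bucket, creating one if none exists."""
    s3 = boto3.client("s3", region_name=region)
    # list_buckets returns every bucket in the account; the real SDK also
    # checks the bucket's region before reusing it.
    existing = [
        b["Name"]
        for b in s3.list_buckets()["Buckets"]
        if b["Name"].startswith(BUCKET_PREFIX)
    ]
    if existing:
        return existing[0]
    # Hypothetical naming: prefix plus a random lowercase suffix.
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=10))
    bucket_name = f"{BUCKET_PREFIX}{suffix}"
    if region == "us-east-1":
        # us-east-1 rejects an explicit LocationConstraint
        s3.create_bucket(Bucket=bucket_name)
    else:
        s3.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )
    return bucket_name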

@JonnyBurger JonnyBurger marked this pull request as ready for review June 16, 2025 13:25
@JonnyBurger JonnyBurger merged commit 7319851 into main Jun 16, 2025
13 of 14 checks passed
@JonnyBurger JonnyBurger deleted the copilot/fix-5383 branch June 16, 2025 13:25