This project deploys instrumentation for shipping CloudWatch logs to Logz.io via a Firehose Delivery Stream.
This project uses a CloudFormation template to create a Stack that deploys:
- A Firehose Delivery Stream with Logz.io as the stream's destination.
- A Lambda function that adds Subscription Filters to CloudWatch Log Groups, as defined by the user's input.
- Roles, log groups, and other resources necessary for this instrumentation.
To deploy this project, click the button that matches the region you wish to deploy your Stack to:
Specify the stack details per the table below, check the checkboxes, and select Create stack.
| Parameter | Description | Required/Default |
|---|---|---|
| `logzioToken` | The token of the account you want to ship logs to. | Required |
| `logzioListener` | Listener host. | Required |
| `logzioType` | The log type you'll use with this Lambda. This can be a built-in log type or a custom log type. | `logzio_firehose` |
| `services` | A comma-separated list of services you want to collect logs from. Supported services: `apigateway-websocket`, `apigateway-rest`, `rds`, `cloudhsm`, `codebuild`, `connect`, `elasticbeanstalk`, `ecs`, `eks`, `aws-glue`, `aws-iot`, `lambda`, `vpc`, `macie`, `amazon-mq`, `batch`, `athena`, `cloudfront`, `codepipeline`, `config`, `dms`, `emr`, `es`, `events`, `firehose`, `fsx`, `guardduty`, `inspector`, `kafka`, `kinesis`, `redshift`, `route53`, `sagemaker`, `secretsmanager`, `sns`, `ssm`, `stepfunctions`, `transfer` | - |
| `customLogGroups` | A comma-separated list of custom log groups to collect logs from, or the ARN of the secret (explanation below) storing the log groups list if it exceeds 4096 characters. Note: you can also match log group names by prefix with a trailing wildcard (e.g., `prefix*`), which matches all log groups that start with the specified prefix. | - |
| `useCustomLogGroupsFromSecret` | If your `customLogGroups` list exceeds 4096 characters, set to `true` and configure `customLogGroups` as described below. | `false` |
| `triggerLambdaTimeout` | The number of seconds Lambda allows the trigger function to run before stopping it. | `300` |
| `triggerLambdaMemory` | The trigger function's allocated memory, in MB (CPU is allocated proportionally). | `512` |
| `triggerLambdaLogLevel` | Log level for the Lambda function. One of: `debug`, `info`, `warn`, `error`, `fatal`, `panic`. | `info` |
| `httpEndpointDestinationIntervalInSeconds` | The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. | `60` |
| `httpEndpointDestinationSizeInMBs` | The size of the buffer, in MB, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. | `5` |
| `filterPattern` | CloudWatch Logs filter pattern to filter the logs sent to Logz.io. Leave empty to send all logs. For more information on the syntax, see Filter and Pattern Syntax or check the Filter Pattern Guide. | (empty string) |
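If you prefer the AWS CLI to the console button, the same parameters can be passed to `aws cloudformation create-stack`. This is a minimal sketch: the stack name, template URL, token, and listener host below are placeholders, not values from this project.

```shell
#!/bin/sh
# Minimal sketch: deploying the stack from the CLI instead of the console.
# STACK_NAME, TEMPLATE_URL, and the parameter values are placeholders.
STACK_NAME="logzio-firehose"
TEMPLATE_URL="https://example-bucket.s3.amazonaws.com/template.yaml"  # placeholder

# 'echo' prints the command instead of executing it; remove it to run for real.
# Note: commas inside a single ParameterValue (e.g. several services) must be
# escaped as '\,' in this shorthand syntax.
echo aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-url "$TEMPLATE_URL" \
  --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
  --parameters \
    ParameterKey=logzioToken,ParameterValue=YOUR_TOKEN \
    ParameterKey=logzioListener,ParameterValue=YOUR_LISTENER_HOST \
    ParameterKey=services,ParameterValue=lambda
```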
AWS limits each log group to 2 subscription filters. If your chosen log group already has 2 subscription filters, the trigger function won't be able to add another one.
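You can check how many subscription filters a log group already has with `aws logs describe-subscription-filters`; a minimal sketch, where the log group name is a placeholder:

```shell
#!/bin/sh
# Sketch: list the subscription filters already attached to a log group,
# since AWS allows at most 2 per log group. LOG_GROUP is a placeholder.
LOG_GROUP="/aws/lambda/my-function"

# 'echo' prints the command instead of executing it; remove it to run for real.
echo aws logs describe-subscription-filters \
  --log-group-name "$LOG_GROUP" \
  --query 'subscriptionFilters[].filterName'
```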
If your `customLogGroups` list exceeds the 4096-character limit, follow these steps:
1. Open AWS Secrets Manager.
2. Click Store a new secret.
3. Choose Other type of secret.
4. For the key, use `logzioCustomLogGroups`; for the value, store your comma-separated custom log groups list.
5. Name your secret, for example `LogzioCustomLogGroups`.
6. Copy the new secret's ARN.
7. In your stack, set `customLogGroups` to the secret ARN you copied, and set `useCustomLogGroupsFromSecret` to `true`.
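The console steps above can also be done with the AWS CLI. A minimal sketch, where the secret name and log group list are example values (the key inside the secret must be `logzioCustomLogGroups`):

```shell
#!/bin/sh
# Sketch: store the custom log groups list as a Secrets Manager secret.
# SECRET_NAME and LOG_GROUPS are example values.
SECRET_NAME="LogzioCustomLogGroups"
LOG_GROUPS="/aws/lambda/func-a,/aws/lambda/func-b,/aws/rds/prod*"

# 'echo' prints the command instead of executing it; remove it to run for real.
# When actually run, create-secret returns the new secret's ARN, which is
# the value to put in the customLogGroups stack parameter.
echo aws secretsmanager create-secret \
  --name "$SECRET_NAME" \
  --secret-string "{\"logzioCustomLogGroups\": \"$LOG_GROUPS\"}"
```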
Give the stack a few minutes to deploy.
Once new logs are added to your chosen log groups, they will be sent to your Logz.io account.
If you've used the `services` field, wait about 6 minutes before creating new log groups for your chosen services. This is due to cold start and custom resource invocation, which can cause the Lambda to behave unexpectedly.
- 0.4.2:
  - Refactor AWS namespaces prefix
- 0.4.1:
  - Avoid retry on `LimitExceededException`
  - Increase default timeout `60` -> `300`
- 0.4.0:
  - Added support for subscription filter patterns using the `filterPattern` parameter
  - Added support for additional AWS services: athena, cloudfront, cloudwatch, codepipeline, config, dms, dynamodb, ec2, elasticache, elasticfilesystem, elasticloadbalancing, emr, es, events, firehose, fsx, guardduty, inspector, kafka, kinesis, kms, redshift, route53, s3, sagemaker, secretsmanager, sns, sqs, ssm, stepfunctions, transfer, waf, workspaces
- 0.3.3:
  - Fix timing issue to make sure the bucket is created before the delivery stream
  - Fix issue where the EventBridge trigger for log group creation was not created
- 0.3.2:
  - Fix issue where the EventBridge trigger for log group creation was not created when using only `customLogGroups`.
- 0.3.1:
  - Support deploying multiple stacks within the same AWS account
  - Resolve bug with update mechanism
- 0.3.0:
  - Support prefixes in `customLogGroups` via wildcard
  - Upgrade Go `1.19` -> `1.22`
  - Parallelized subscription filter updates to improve performance
- 0.2.1: Add support for `aws-batch` service.
- 0.2.0: Option to provide `customLogGroups` exceeding 4KB.
- 0.1.0: Introduced the ability to directly update service and custom log parameters within the stack.
- 0.0.2: Fix for RDS service - look for prefix `/aws/rds/`
- 0.0.1: Initial release.