From 2eaf2b9ffabdbf17090a0ce2dfad2f3a1cdfccac Mon Sep 17 00:00:00 2001
From: taupirho
Date: Thu, 21 Jun 2018 09:30:56 +0100
Subject: [PATCH] Update README.md

---
 README.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 16f1335..9a1a2f1 100644
--- a/README.md
+++ b/README.md
@@ -17,10 +17,7 @@ After the stream was created it was just a matter of creating two python lambdas
 and I have put plenty of comments in so won't discuss them further here. A slightly unusual feature is that neither lambda
 is triggered by an event - although they can and usually will be. They are stand-alone and can be run manually as and when
 required or more likely as part of an AWS Step function process __(see my article on using step functions [here](https://github.com/taupirho/using-aws-step))__.
-I haven't included any error/retry processing in my examples but in production you obviously would include this. Also for asynchronous
--i.e event based - running you would set up DLQ's for the reading/writing processes to send failed messages to - either to an
-SNS topic or to SQS for futher investigation and/or processing. Keep an eye on the DeadLetterErrors cloudwatch metric though as writes
-to DLQ's can fail too! The only other thing to note is that the lambdas obviously need permission to read and write to kinesis. I took the
+ The only other thing to note is that the lambdas obviously need permission to read and write to kinesis. I took the
 easy option and extended the default lambda-execution-role to allow all access to kinesis but again in a production system you would want to nail this down to very specific permissions.
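The README text kept by this patch notes that neither lambda is event-triggered and that both can be run manually as and when required. As a minimal sketch of what such an on-demand run could look like with boto3, assuming a hypothetical function name `kinesis-reader` (the actual function names are not given in the patch):

```python
import json
import boto3

# Hypothetical function name -- substitute the lambda actually deployed.
FUNCTION_NAME = "kinesis-reader"

lam = boto3.client("lambda")

# On-demand run of the stand-alone lambda. "RequestResponse" waits for
# the result; "Event" would fire it asynchronously instead.
response = lam.invoke(
    FunctionName=FUNCTION_NAME,
    InvocationType="RequestResponse",
    Payload=json.dumps({}).encode("utf-8"),
)
print(response["Payload"].read().decode("utf-8"))
```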
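The paragraph removed by this patch recommended DLQs for asynchronous (event-based) running, pointing failed events at SNS or SQS, and warned that writes to the DLQ can fail too (visible in the `DeadLetterErrors` CloudWatch metric). A hedged sketch of that setup with boto3; the function name and queue ARN below are placeholders, not values from the repository:

```python
import boto3

lam = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# Placeholder names/ARNs -- substitute the real lambda and SQS queue (or SNS topic).
FUNCTION_NAME = "kinesis-writer"
DLQ_ARN = "arn:aws:sqs:eu-west-1:123456789012:kinesis-writer-dlq"

# Attach the DLQ: events from failed asynchronous invocations land here
# after Lambda's automatic retries are exhausted.
lam.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    DeadLetterConfig={"TargetArn": DLQ_ARN},
)

# Writes to the DLQ can themselves fail, so alarm on DeadLetterErrors
# as the removed paragraph advises.
cloudwatch.put_metric_alarm(
    AlarmName=f"{FUNCTION_NAME}-dead-letter-errors",
    Namespace="AWS/Lambda",
    MetricName="DeadLetterErrors",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
)
```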
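The remaining README text admits to extending the default lambda-execution-role with blanket Kinesis access and says a production system should nail this down to very specific permissions. As a sketch of the tighter alternative, assuming a single stream and using a hypothetical stream ARN (the role name comes from the README; the stream ARN does not):

```python
import json
import boto3

iam = boto3.client("iam")

# Role name as mentioned in the README; the stream ARN is a placeholder.
ROLE_NAME = "lambda-execution-role"
STREAM_ARN = "arn:aws:kinesis:eu-west-1:123456789012:stream/my-stream"

# Grant only the actions the reading/writing lambdas need, on the one
# stream they use, instead of kinesis:* on all resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStream",
                "kinesis:GetShardIterator",
                "kinesis:GetRecords",
                "kinesis:PutRecord",
                "kinesis:PutRecords",
            ],
            "Resource": STREAM_ARN,
        }
    ],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="kinesis-stream-access",
    PolicyDocument=json.dumps(policy),
)
```

Splitting this further, so the reading lambda's role lacks the `Put*` actions and the writing lambda's role lacks the `Get*` actions, would narrow the permissions even more.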