CloudFormation lets you create templates for your stack configurations, so I would suggest treating them as templates: you probably don't want hardcoded values in them.
There are a few ways to make names dynamic. One of them: don't specify a name at all, and AWS will generate a unique one for you. Another: use the !Sub, !ImportValue, or !Ref intrinsic functions to build up dynamic values, e.g.:

    TableName: !Sub "${AWS::StackName}-my-unique-content"

which will always be unique per stack, but will also contain some descriptive information about the contents inside.
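For example, a table resource using this pattern might look like the sketch below. The logical name DynamoTableResource matches the !Ref used later in this answer; the "-orders" suffix and the key schema are just placeholders:

    Resources:
      DynamoTableResource:
        Type: "AWS::DynamoDB::Table"
        Properties:
          # Omit TableName entirely to let CloudFormation generate a unique name,
          # or build one from the stack name as below
          TableName: !Sub "${AWS::StackName}-orders"
          AttributeDefinitions:
            - AttributeName: "id"
              AttributeType: "S"
          KeySchema:
            - AttributeName: "id"
              KeyType: "HASH"
          BillingMode: PAY_PER_REQUEST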
If your Python lambdas are inside the same stack, pass in the table name as an environment variable (imo the easiest way):
    PythonFunction:
      Type: "AWS::Serverless::Function"
      Properties:
        Environment:
          Variables:
            TABLE_NAME: !Ref DynamoTableResource
If they are created in another stack, but in the same account, your best bet is to export the table name as an Output and then reference it with the !ImportValue function. For example:
dynamostack.yaml
    Outputs:
      DynamoDBResource:
        Description: "DynamoDB table"
        Value: !Ref DynamoTableResource
        Export:
          Name: !Sub "${AWS::StackName}-exported-dynamo-table-name"
And in another stack:
functionstack.yaml
    PythonFunction:
      Type: "AWS::Serverless::Function"
      Properties:
        Environment:
          Variables:
            TABLE_NAME: !ImportValue "dynamostack-exported-dynamo-table-name"
Be careful with inter-stack references though, you might end up with circular dependencies all over the place.
Another option is to use Parameters in your CloudFormation template: pass the DynamoDB table names in as parameters, and reference them (with !Ref) in your lambda functions.
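A sketch of the parameter approach (the parameter name TableNameParam is made up here):

    Parameters:
      TableNameParam:
        Type: String
        Description: "Name of an existing DynamoDB table"

    Resources:
      PythonFunction:
        Type: "AWS::Serverless::Function"
        Properties:
          Environment:
            Variables:
              TABLE_NAME: !Ref TableNameParam

You then supply the value at deploy time, e.g. with `aws cloudformation deploy --parameter-overrides TableNameParam=my-table`.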