This question shows a misunderstanding of the purpose of the server_name directive. Many servers do let you specify the IP address of the interface on which to create the listening socket, but in NGINX that is the job of the listen directive, not server_name.
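As a minimal sketch (with a hypothetical documentation address), binding the socket to a specific interface is done with listen, while server_name matches something else entirely:

```nginx
server {
    # listen controls which IP address and port the socket binds to
    listen 203.0.113.10:80;

    # server_name matches the HTTP Host header, not a network interface
    server_name example.com;

    root /var/www/example;
}
```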
To understand the server_name directive, you need to understand part of the HTTP protocol. When your web browser makes an HTTP request to a website, for example https://serverfault.com/questions, it first looks up serverfault.com in DNS to obtain an IP address - this is why you need a DNS entry.
Once your web browser has the IP address, it opens a TCP connection to port 80 on that IP address. The web browser then needs to tell the webserver which page is being requested. It does this by using the HTTP protocol. The first line of the request will be:
GET /questions HTTP/1.1
This tells the server the URL path, and the HTTP version that the client is using to communicate.
The second line is:
Host: serverfault.com
This is called an HTTP header, and the Host header tells the webserver which website should handle the request. When a server only hosts a single website, this probably feels like overkill, but webservers often handle many websites.
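Putting the two lines together, the request for the page above looks like this on the wire (headers end with a blank line):

```
GET /questions HTTP/1.1
Host: serverfault.com

```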
The Host header is what the nginx server_name directive matches against: it tells nginx to use this server block only when the Host header matches the given name.
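For example, two sites can share the same IP address and port, and nginx picks the server block purely by the Host header (hypothetical names and paths):

```nginx
# Requests with "Host: example.com" land here
server {
    listen 80;
    server_name example.com;
    root /var/www/example;
}

# Requests with "Host: other.example.net" land here, on the same socket
server {
    listen 80;
    server_name other.example.net;
    root /var/www/other;
}
```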
So, to answer your question, it does not make sense to set an IP address in the server_name directive. It would be more normal to register a domain name and create a CNAME DNS record pointing at the AWS server's DNS name. When you spin up a replacement server, you update the CNAME to point at the new server's DNS name.
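In BIND zone-file notation, that CNAME might look like this (hypothetical names; the target is the EC2-style public DNS name of the instance):

```
www.example.com.  300  IN  CNAME  ec2-203-0-113-10.compute-1.amazonaws.com.
```

Replacing the server then only means editing the CNAME target; the server_name in your nginx config stays the same.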
A production website would be placed behind an Elastic Load Balancer, which maintains a stable DNS name for your CNAME to reference.