
I have a web app that communicates with a backend server, and the users of the web app are organisations that each have a single login for the entire organisation. The app is meant to be used, for example, on TVs in the cafeteria to display today's activities and such. However, it also has an "admin" part, where an admin can change settings, edit news texts, etc.

My concern is that any of the admins in one of the organisations who know the login details of that organisation might log in on a computer, open Chrome DevTools and start looking at the network traffic to/from the server.

I intend to encrypt the data, so it won't be viewable in plain text. However, another concern is that the admin could save the encrypted data and try to send it to the server at a later time, potentially causing "interesting" results in the cafeteria... for example showing a news text from last year or other pranks ;)

So I got to thinking that it might be a good idea to "expire" the data being sent from the client to the server. One way to do it would be using a separate signature/HMAC based on a timestamp (using, for example, 30-second intervals to account for minor time delays between client and server). Another, simpler, way would be to include the current timestamp with any data being sent before encrypting it. That way, when the server decrypts the data it can just check that the timestamp is within a reasonable time from the current time.

Example

// Original data
const data = '{"bannertext": "Remember that tomorrow is a public holiday, so no work!"}';

// Prepend timestamp
const expiringData = '#ts.' + getCurrentTimestamp() + '#' + data;

// Encrypt and send the data
http.post(encrypt(expiringData));

Since the data and timestamp are encrypted, any user looking at the request in Chrome DevTools won't know what it contains. If they save the contents of the request and try to send it later to trick everybody into thinking there is a holiday tomorrow, it won't work, because the server will see that the timestamp is old.
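
And on the server it would be something like this (still pseudocode - decrypt() is the counterpart of encrypt() above, and getCurrentTimestamp() returns the same kind of value as on the client):

// Server side: decrypt, pull out the timestamp, and reject anything too old
const MAX_AGE_SECONDS = 30;

function handleIncoming(encryptedBody) {
  const plain = decrypt(encryptedBody);              // counterpart of encrypt() on the client
  const match = plain.match(/^#ts\.(\d+)#(.*)$/s);   // "#ts.<timestamp>#<json data>"
  if (!match) {
    throw new Error('Malformed message');
  }
  const sentAt = parseInt(match[1], 10);
  if (Math.abs(getCurrentTimestamp() - sentAt) > MAX_AGE_SECONDS) {
    throw new Error('Message expired');              // old/replayed data is rejected
  }
  return JSON.parse(match[2]);                       // the original data
}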

I am wondering if this is a commonly used technique and, if so, whether it has a specific name. When searching I mostly find material about TOTP (Time-based One-Time Passwords) and HMAC signatures with a time component, but not exactly this idea of including the timestamp with the data to be encrypted.

Magnus
  • I don't understand the problem. If I correctly understand you, the admin knows the password which is required to do everything with the web app. What could the admin achieve by analyzing the traffic which cannot already be achieved by using the admin password? – Steffen Ullrich Sep 09 '22 at 19:12
  • Leaving aside the question of why somebody with the admin creds couldn't just pull the prank by sending the same "original data" message at a later date... how are you going to encrypt the data such that the admin can't see the key? If you encrypt it at the client - as you presumably must - then the user of the client (the admin) has total control over the process and can extract the key easily. This is after all how TLS already works (you ARE using HTTPS right?), which incidentally provides replay protection already. Also, you would need signing (integrity), not encryption (confidentiality). – CBHacking Sep 10 '22 at 01:21

1 Answer


Expanding my comment into a proper answer...

The general concept of adding a timestamp or other nonce (number used once) - typically a monotonically increasing one - to data before cryptographically protecting it, as a form of replay protection, is very common. It is, for example, used in JWTs (JSON Web Tokens, a common form of session or access token), although in that case there's a window of validity rather than the token truly being single-use. X.509 certificates - the things that make the Public Key Infrastructure and HTTPS work securely - also do this. AWS's SigV4 request-signing scheme includes the timestamp in the signed data to enforce expiry. There are many other examples.
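
For instance, with the widely used jsonwebtoken package in Node, the expiry is just another signed claim that verification checks for you (rough sketch; the secret and the 30-second lifetime are placeholders):

const jwt = require('jsonwebtoken');

const SECRET = 'server-side-secret-placeholder';   // in reality: from config, never sent to the client

// The exp claim (here 30 seconds from now) is part of the signed payload,
// so a client can't extend it without breaking the signature.
const token = jwt.sign({ bannertext: 'Public holiday tomorrow' }, SECRET, { expiresIn: '30s' });

try {
  // verify() checks the signature AND the expiry in one step.
  const payload = jwt.verify(token, SECRET);
  console.log(payload.bannertext);
} catch (err) {
  // TokenExpiredError for an old/replayed token, JsonWebTokenError for a tampered one
}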

However...

  • This protects messages against replay outside of a time window, but doesn't fully prevent replay in general. To do that, the client needs to guarantee that every message has a unique nonce, and the server needs to remember all of them (or, if the nonce is monotonically increasing, the server only needs to remember the latest nonce and reject any request with a lesser-or-equal one).
  • When one machine is generating the timestamp and another is consuming it, clock skew becomes a problem. Even leaving aside network latency, it's reasonably common for two machines' clocks to be at least a few seconds apart, and sometimes much more.
  • To protect the timestamp/nonce, the value must be signed or authenticated (which is not the same thing as encrypted) or otherwise integrity-protected. Encryption by itself doesn't actually prevent me from changing the ciphertext (encrypted data) in a way that controllably and undetectably changes the decrypted plaintext. For example, if I know the timestamp is a Unix time value starting four bytes into the message, with many ciphers I can easily flip a bit or two to make the timestamp appear to be well in the future, or - if I know the exact original value (which I might, since I know when it was sent) - even precisely change it to any other value. There are some cipher constructions that add integrity as well as confidentiality, but by default, encryption doesn't prevent tampering. None of the schemes mentioned above involve encrypting the timestamp, and indeed X.509 certs specifically need to be in plain text because the whole point is that everybody can read them. (A rough sketch of what an authenticated, nonced message looks like follows this list.)
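
To make those caveats concrete, here is a rough sketch - using Node's built-in crypto module; the key handling, the in-memory nonce store, and the message format are all simplified placeholders - of what an authenticated (not merely encrypted) message with a timestamp and a monotonically increasing nonce looks like. It deliberately sidesteps the separate question of who can safely hold the key, which is what the next section is about:

const crypto = require('crypto');

const KEY = crypto.randomBytes(32);   // placeholder; whoever holds this key can create AND verify messages
const WINDOW_MS = 30 * 1000;          // allowed clock skew / validity window
let lastSeenNonce = 0;                // verifier state; a real server would persist this per sender

// Sender side: the HMAC covers nonce + timestamp + data, so none of them can be altered undetectably.
function protect(nonce, data) {
  const message = JSON.stringify({ nonce, ts: Date.now(), data });
  const mac = crypto.createHmac('sha256', KEY).update(message).digest('hex');
  return message + '.' + mac;
}

// Verifier side: check the MAC first, then freshness, then that the nonce is strictly increasing.
function verify(protectedMessage) {
  const idx = protectedMessage.lastIndexOf('.');
  const message = protectedMessage.slice(0, idx);
  const mac = protectedMessage.slice(idx + 1);
  const expected = crypto.createHmac('sha256', KEY).update(message).digest('hex');
  if (mac.length !== expected.length ||
      !crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) {
    throw new Error('tampered message');
  }
  const { nonce, ts, data } = JSON.parse(message);
  if (Math.abs(Date.now() - ts) > WINDOW_MS) throw new Error('expired message');
  if (nonce <= lastSeenNonce) throw new Error('replayed message');
  lastSeenNonce = nonce;
  return data;
}

// Usage: the second verify() of the same message fails with 'replayed message'.
const msg = protect(1, { bannertext: 'Remember that tomorrow is a public holiday!' });
console.log(verify(msg));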

With that all said, your idea has several much bigger problems!

  1. Do you trust the admins or not? Per the problem specification, your admins can send arbitrary "news" to the server anyhow... but you're concerned about them sending outdated messages? That doesn't make sense. They don't need to replay an old message, just log in and send it again.
  2. You talk about encrypting the data before sending it. Encrypting it where, and with what key? Presumably you don't just mean TLS (the protocol that secures HTTPS) - though you absolutely do need to be using TLS - because that doesn't obscure the data in the browser dev tools. But if you don't mean TLS, you mean you're going to add your own encryption (or even signing!) before the message is sent. The problem is, the only party who can perform that encryption is... the client! The very one that your (supposedly) malicious administrator is using! You can use asymmetric encryption to encrypt the data without giving the ability to decrypt it, but you can't encrypt the data before the client sees it because the client is the one doing the encryption. You can encrypt it before the network tab shows the message, but you can't encrypt it before the JS debugger sees the message - the plain text will be hanging out right there in a JS variable!
  3. Also, you can't prevent an attacker from collecting the encryption key for later use, except by periodically expiring and rotating that key. Again, the client needs to know the key in order to use it for encryption. The attacker could extract that key and use it in the future to generate arbitrary messages.
  4. You don't have authentication! If all the admins of an organization are expected to use the same admin login, then those credentials can't be used for authentication - you don't know who is using them - and they only serve as authorization to send admin requests. Obviously, authorization is important, but the lack of authentication is extremely bad. For example, suppose one of the admins does use their access to prank people. Who did it? No way to know! A well-designed system would have an audit log that records every user who logs in and every privileged action taken, but in this case, because you can't distinguish one admin from another, there's no way to know.
  5. Also, you're requiring shared credentials for multi-admin orgs. This is a major pain from a security perspective, because sometimes peoples' access needs to be revoked (if they left the company, changed teams, were found to be abusing it, etc.) but there's no way to make sure somebody forgets a password. Instead, you have to rotate the admin password for every admin every time you remove somebody's access. That's a huge pain in the butt for larger orgs, and in practice, it won't happen. So the very problem you're worried about - which wouldn't be an issue in a well-designed system, and which your proposed solution doesn't fix for several reasons - would end up coming to pass (albeit in a different way than you thought): somebody who shouldn't be able to make admin requests (e.g. to modify the "news") almost certainly would be able to, possibly from outside the company entirely, because rotating the password was too much hassle or somebody just forgot one time.

The actual solutions to your problem:

  • Real authentication. Every user has a unique account that only they can use. Ideally even the TVs don't share credentials; presumably their access is read-only, but they should have unique creds too, so you can decommission one without worrying that it has a stored cred that can be used to access internal company messages, or needing to rotate the creds on every other TV at the same time. If you want really good security, add multi-factor authentication for admins. If you want to make corporate customers happy, add support for single sign-on (SSO) via at least one of Kerberos (Active Directory, Open Directory, etc.), OAuth/OIDC (Google Suite, Office 365, etc.), and/or SAML (Okta, Salesforce, lots of other "enterprise" user management solutions). That way, corporate admins can add or remove users, or edit their permissions, from a central location; this makes onboarding easy, and makes offboarding easy to do right without requiring remembering any extra steps.
  • Audit logging. Every time a user does something - log in, access privileged data, make a change, etc. - create a log entry stating who did what, and both when it happened and where the request came from (IP address, usually). The log should be append-only (no way for an admin to modify, delete, or overwrite entries). It is critical to log whoever did (or even attempted but failed to do) something, but you shouldn't log the credentials they used. You shouldn't even log incorrect credentials (if somebody tries to log in and fails, it's a good idea to log that, but you must not log the wrong credential they entered; it might just be a typo from which somebody could guess the correct value after seeing the log entry). A rough sketch of such a log entry follows this list.
  • Don't trust the client! Never send any data to the client - whether or not it's visible in the web page, or even in the network tools - that you don't want the user to know. Never trust any security claim (such as when a message was generated or how much access a user has) from the client unless it's also authenticated (typically meaning signed or MAC'd) using a key that only the server has, and the message authenticity has been verified. If you revoke access, destroy the ability to use that user/session on the server side, rather than just requesting the client to forget the token and then trusting that nobody has a copy anymore (this in particular is tricky for JWTs, since they can't be revoked - one major point is that there's no server-side state so there's no list of valid or invalid ones to check against - so if you use JWTs, make their lifetimes very short).
  • Use TLS (HTTPS) for everything, and don't try to reinvent the wheel. TLS provides server authentication, message confidentiality (encryption), message integrity, and replay protection by default. It requires very little extra code to use (you do need to get a certificate, but there are services that provide those for free). You can set non-HTTPS requests to automatically redirect to HTTPS or even not listen on plain HTTP at all, and can set browsers to automatically rewrite HTTP requests as HTTPS requests before even sending them (HTTP Strict Transport Security; protects against "SSL stripping" attacks); a rough sketch of the redirect + HSTS setup also follows this list. No, it doesn't prevent the client from seeing what messages they send and receive in the browser tools, but that is fundamentally impossible to do securely and wouldn't provide any meaningful additional security anyhow. TLS (like all cryptography) is solving a very hard problem. Many people who are way better at cryptography than I am have spent decades probing its security, poking at weaknesses, and offering fixes that get scrutinized in turn. You are not going to do as well, much less any better, on your own. There's a saying in cryptography: never roll your own.
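
A rough sketch of what such an audit entry could look like (a flat file keeps the example self-contained; a real system would write to storage the admins themselves can't modify or delete):

const fs = require('fs');

// Who did what, when, and from where - but never the credentials themselves.
function appendAuditEntry({ userId, action, success, ip }) {
  const entry = {
    userId,                        // the authenticated user, not the password they typed
    action,                        // e.g. 'login', 'update-banner'
    success,                       // record failed attempts too
    ip,                            // where the request came from
    at: new Date().toISOString(),  // when it happened
  };
  fs.appendFileSync('audit.log', JSON.stringify(entry) + '\n');
}

// e.g. in a request handler, after the user has been authenticated:
appendAuditEntry({ userId: 'alice@example.org', action: 'update-banner', success: true, ip: '203.0.113.7' });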
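
And a rough sketch of the redirect-plus-HSTS part, assuming an Express app (the max-age value is typical, not mandatory; behind a reverse proxy you would also need app.set('trust proxy', true) for req.secure to reflect the original protocol):

const express = require('express');
const app = express();

// Send plain-HTTP requests to HTTPS, and tell browsers to remember that (HSTS)
// so future requests are rewritten to HTTPS before they ever leave the browser.
app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  next();
});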
CBHacking