As I read it, your question boils down to: is it safe to use the same key for both symmetric encryption of some data, and a MAC on some other data?
It is possible to build a (somewhat contrived) example where such usage is not safe. For instance, the symmetric encryption could be a custom stream cipher where the stream consists of HMAC values computed over successive values of a counter; in that case, a MAC using the same key could interact with the encryption and leak information. However, with "normal" encryption algorithms and HMAC, risks are low. The same could not be said if the MAC were CBC-MAC: using the same key for encryption in CBC mode, and for CBC-MAC, is a deadly sin.
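To make the contrived example concrete, here is a minimal Python sketch (the construction and names are illustrative, not a real cipher): the keystream is HMAC(K, counter), so an HMAC computed with the same key over an attacker-chosen counter encoding directly reveals a keystream block.

```python
import hmac, hashlib

K = b"shared-key"  # illustrative key; same key used for both operations

def keystream_block(i: int) -> bytes:
    # Contrived stream cipher: keystream block i is HMAC(K, i).
    return hmac.new(K, i.to_bytes(8, "big"), hashlib.sha256).digest()

def encrypt(plaintext: bytes) -> bytes:
    # XOR the plaintext with successive 32-byte keystream blocks.
    out = bytearray()
    for i in range(0, len(plaintext), 32):
        block = keystream_block(i // 32)
        out.extend(b ^ k for b, k in zip(plaintext[i:i + 32], block))
    return bytes(out)

# If the same key also serves as a plain HMAC over attacker-chosen data,
# asking for the MAC of a counter encoding leaks a keystream block:
leaked = hmac.new(K, (0).to_bytes(8, "big"), hashlib.sha256).digest()
assert leaked == keystream_block(0)  # first keystream block, recovered
```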
In general, it is best if each individual key serves a unique purpose. The generic method, here, is to have a "master key" K, and to derive from it, with a one-way Key Derivation Function, one key for encryption and another key for the MAC. In practice, this can be as simple as hashing K with SHA-256 and splitting the 256-bit output into two 128-bit halves: the first half is used for encryption, the second half for the MAC. This is secure under some partial one-wayness assumptions on SHA-256, assumptions which are quite reasonable.
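A minimal sketch of that split-the-hash derivation, in Python with the standard library (the function name and key sizes are illustrative assumptions):

```python
import hashlib

def derive_keys(master_key: bytes) -> tuple[bytes, bytes]:
    """Derive independent encryption and MAC keys from one master key."""
    digest = hashlib.sha256(master_key).digest()  # 32-byte output
    enc_key = digest[:16]   # first 128-bit half: encryption key
    mac_key = digest[16:]   # second 128-bit half: MAC key
    return enc_key, mac_key
```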
SSL itself uses its own KDF, which is called, in the SSL specification, the "PRF".
If the secret key is somewhat weak, i.e. derived from a password, then attackers observing the exchange may use what they saw to run an offline dictionary attack, i.e. try potential passwords; this is "offline" in the sense that the attackers do it on their own machines and do not have to talk to the honest server for each try. Offline dictionary attacks are a problem. Running the protocol within SSL is a good way to thwart such eavesdroppers.
If there is no SSL, the same attackers could turn active and modify the encrypted data returned by the server. Thus, the encryption system should also include its own MAC. Combining MAC and encryption is known to be tricky, so modes which do the job properly (like GCM) are highly advisable.
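For illustration, a short sketch of authenticated encryption with AES-GCM; it assumes the third-party Python `cryptography` package, but any AEAD API would do:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat for a given key
ciphertext = aesgcm.encrypt(nonce, b"response body", b"header")  # data + AAD
# decrypt() raises InvalidTag if the ciphertext or the AAD was modified:
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")
```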
More deviously, attackers could also make fake errors. If you send your GET over plain HTTP, an active attacker could intercept it and return a fake 404 response: since there is no encrypted data in a 404 (indeed, the point of the 404 is to state that there is no data at all), the client has nothing to verify, and cannot reject the 404 message as fake. Depending on what you use your protocol for, this may induce security issues. To protect against active attackers, all responses from the server should be authenticated, including negative responses (404). But, at that point, you are on the verge of reinventing SSL...
In a similar vein, an active attacker could return an old response from the server. Suppose that at some time, data block D1 was stored on the server, and could be obtained with a GET. Later on, the data block is updated to D2. However, the key has not changed, and the GET request from the client is still the same. An active attacker could substitute a previously intercepted copy of D1 for the D2 sent by the server.
This can be avoided by using a nonce -- a client nonce, sent by the client; the server must then dynamically compute a MAC on what it returns, and the MAC input must include the client nonce (see the sketch after this list). At that point:
- the protocol is no longer as lightweight as could be hoped for;
- this really looks like a homemade SSL.
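Here is a sketch of the nonce-in-the-MAC idea, using only the Python standard library; the message layout (nonce concatenated with data) is an illustrative assumption, not a fixed format:

```python
import hmac, hashlib, os

def client_nonce() -> bytes:
    # Fresh random nonce generated by the client for each request.
    return os.urandom(16)

def server_mac(mac_key: bytes, nonce: bytes, data: bytes) -> bytes:
    # The MAC input covers the client nonce, so a replayed old response
    # will not verify against a fresh nonce.
    return hmac.new(mac_key, nonce + data, hashlib.sha256).digest()

def client_verify(mac_key: bytes, nonce: bytes, data: bytes, tag: bytes) -> bool:
    # Constant-time comparison of the received tag with a recomputed one.
    return hmac.compare_digest(tag, server_mac(mac_key, nonce, data))
```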
Summary: it is not easy to outperform SSL. SSL is relatively complex, but that complexity is (mostly) intrinsic to the hardness of the problem at hand. A communication protocol which resists impersonation, alteration and replay attacks will have to include a number of finely tuned cryptographic elements, and will not be substantially simpler or cheaper to operate than SSL.
If you can arrange for your client and server to share a common high-entropy secret key, then TLS PSK cipher suites will give you good performance and about as little complexity as is possible (no certificate, no asymmetric crypto).