If we want to avoid being bogged down in byzantine terminology disputes, then we should classify things logically rather than exhaustively.
So let's start with what we mean by "traffic": this is a transfer of some data element between two or more parties. The "parties" are in different space-time positions (e.g. the two parties may be "you, now" and "you, next month", the traffic then being an encrypted file on your hard disk).
There are two main categories of security services at that point:
Confidentiality: outsiders must not learn some attributes of the data element. This category includes (at least) the following sub-categories:
Content confidentiality: making data unreadable for third parties that know that the message exists and are able to observe the encoding elements that convey it. This is where encryption is a useful tool. Note that data length is typically not well hidden by encryption, and can still reveal a lot.
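That length leak is easy to see with a one-time pad, the simplest cipher: the content becomes unreadable, but the ciphertext is exactly as long as the plaintext. A minimal sketch (the function name is mine, not a standard API):

```python
import os

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: XOR each plaintext byte with a fresh random key byte.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

key, ct = otp_encrypt(b"attack at dawn")

# Decryption is the same XOR with the same key.
pt = bytes(c ^ k for c, k in zip(ct, key))
assert pt == b"attack at dawn"

# Content is hidden, but the length passes through unchanged:
assert len(ct) == len(b"attack at dawn")
```

Real ciphers such as AES in common modes behave the same way up to a small constant overhead, which is why an eavesdropper can often tell "yes" from "no, thank you" without decrypting anything.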
Metadata: this is about countering traffic analysis and also maintaining privacy. Tor is here.
Existence: when the confidentiality feature being sought is to prevent outsiders from even noticing that any traffic is taking place at all, then this is called steganography.
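To make the idea concrete, here is a deliberately naive steganography sketch (function names are mine): hide a secret in the least significant bits of an innocuous-looking byte sequence, so that the carrier still looks like ordinary data.

```python
def embed(cover: bytearray, secret: bytes) -> bytearray:
    # Spread the secret's bits, LSB-first, over the low bit of each cover byte.
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(cover) >= len(bits), "cover too small for secret"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return out

def extract(stego: bytearray, n_bytes: int) -> bytes:
    bits = [stego[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                 for b in range(n_bytes))

stego = embed(bytearray(b"\xff" * 64), b"hi")
assert extract(stego, 2) == b"hi"
```

This toy would not survive any real statistical analysis; serious steganography is about making the embedded data statistically indistinguishable from the cover's natural noise.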
Integrity: any alteration to the traffic shall be reliably detected by (at least) the parties that are supposed to receive the data. In this category, we will find the following:
Message authenticity: the receiver shall be able to ascertain that whatever it receives really is the genuine data. Note that this raises a question of definition: what makes some data "genuine"? In particular, if the definition of "genuine" implies "being sent by a specific, named entity", then this category includes sender authentication. Conversely, if you take the example of some HTTPS Web server, the client is (at the SSL level) unauthenticated, but the SSL layer still provides message authenticity with the following notion: the server does not know who it is talking to, but it knows that it was the same client throughout the session.
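The symmetric-key flavour of message authenticity is the MAC: both ends share a secret key, and the receiver recomputes the tag to check that the message was not altered. A minimal sketch with Python's standard `hmac` module (the key and messages are made up):

```python
import hashlib
import hmac

key = b"shared secret"          # known to both sender and receiver
msg = b"pay 100 to alice"

# Sender attaches a tag computed over the message.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(
    tag, hmac.new(key, msg, hashlib.sha256).digest())

# Any alteration makes the recomputed tag mismatch.
tampered = b"pay 900 to alice"
assert not hmac.compare_digest(
    tag, hmac.new(key, tampered, hashlib.sha256).digest())
```

Note that a MAC gives exactly the "same party all along" notion described above: anyone holding the key can produce valid tags, so it proves continuity of the conversation, not the identity of a named sender.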
Message authenticity can be further sub-divided based on who can verify it. Notably, when digital signatures are used, message authenticity can be verified by a party that does not otherwise have the power to create such messages. This opens the road to third-party validation and, ultimately, may help in achieving non-repudiation (that concept is more legal than mathematical, but for the part which is still in the world of computers, digital signatures are a powerful tool).
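The asymmetry that makes third-party verification possible can be shown with "textbook RSA" using deliberately tiny, utterly insecure parameters (this is a classroom sketch, not how real signatures are built, padding and key sizes omitted): signing requires the private exponent, while verifying needs only the public key.

```python
import hashlib

# Toy RSA parameters -- far too small for any real use.
p, q = 61, 53
n = p * q        # 3233, public modulus
e = 17           # public exponent
d = 2753         # private exponent: e * d == 1 mod (p-1)*(q-1)

def toy_sign(message: bytes) -> int:
    # Only the holder of d can compute this.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def toy_verify(message: bytes, sig: int) -> bool:
    # Anyone who knows (n, e) can check the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

sig = toy_sign(b"I owe you 10 euros")
assert toy_verify(b"I owe you 10 euros", sig)
```

Since the verifier never learns `d`, it cannot forge signatures itself, which is precisely what allows a judge or auditor to validate a message without gaining the power to create one; that is the technical foothold for non-repudiation.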
Traffic flow guarantees: integrity of individual messages is not enough; "traffic" in general consists of several messages sent at different positions in space-time. A receiver should, barring any attack, receive a given set of messages in a specific order (not necessarily a total order); relevant to this category are replay attacks, dropped messages, reordered messages...
A sub-category includes attempts at surviving such alterations, rather than merely detecting them; this is the notion known as availability. See for instance this answer that discusses resistance of a country-sized network with regard to nuclear attacks.
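The traffic-flow checks above (replays, drops, reordering) boil down to the receiver tracking sequence numbers. A minimal sketch, with made-up names; real protocols such as TLS bind a per-record sequence number into the MAC rather than checking it separately:

```python
def check_stream(messages):
    """Accept (seq, payload) pairs only if each arrives exactly once, in order."""
    expected = 0
    for seq, _payload in messages:
        if seq < expected:
            raise ValueError(f"replayed or reordered message {seq}")
        if seq > expected:
            raise ValueError(f"dropped message(s) before seq {seq}")
        expected += 1
    return expected  # number of messages accepted

check_stream([(0, b"hello"), (1, b"world")])  # in-order stream is accepted
```

For this to resist an active attacker, the sequence number must of course be covered by the per-message authenticity mechanism; otherwise the attacker simply rewrites it.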
The classification above is arbitrary, and other people have come up with other classifications. For instance, the "CIA triad" has been coined as "Confidentiality, Integrity and Authenticity" -- whereas my classification would put authenticity as a sub-case of integrity. Some other people have re-coined the "CIA" acronym as "Confidentiality, Integrity and Availability"; in my classification, availability is also a sub-case of integrity, albeit not the same sub-case.
Predictably enough, since some people were trying to educate crowds to the importance of the "CIA triad" (for any variant thereof), it has been one-upped, or, in that case, three-upped, into the Parkerian hexad that classifies information security into six categories: confidentiality, possession, integrity, authenticity, availability and utility.
Really down to the core, it all depends on what you call "data", "traffic" or "security".