This question touches on several distinct issues. I'll try to answer them in order, and then go into a bit more detail.
(Paraphrasing slightly):
A traceroute from A to B returns a path that is 10 hops long, with a round-trip latency of 300ms. It also shows ~10% packet loss at hop four. Under normal conditions, the average round-trip latency between A and B is between 10ms and 30ms.
Addressing these points in order:
- The number of hops in a path is pretty much irrelevant to effective throughput. What matters is the end-to-end latency, average packet loss, and the settings in the TCP stacks in A and B, particularly relating to TCP windowing. (More details below.)
- 10% packet loss at hop four in a traceroute is unlikely to be symptomatic of problems with the end-to-end connection. Many routers implement features such as control plane policing or ICMP rate limiting (particularly the generation of ICMP "TTL expired in transit" messages, which traceroute relies upon). The only reliable way to measure packet loss is to examine the counters in your TCP stack, or to capture packets from your actual data flow using tcpdump/Wireshark and examine the capture using a protocol analyser.
- It's very rare for the round-trip latency to a given internet destination to jump from 10-30ms to 300ms. Such a change would most likely be the result of a disastrous routing policy change within an ISP, and would normally be rectified as soon as possible. About the only case where I can see this occurring legitimately would be a site with a single physical (Ethernet, DSL etc) connection to its ISP and a satellite backup: failover to a geostationary satellite link adds hundreds of milliseconds of propagation delay.
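To make the first point concrete: the well-known Mathis et al. approximation for steady-state TCP throughput depends only on the MSS, the RTT and the end-to-end loss rate — hop count doesn't appear anywhere in it. A minimal sketch (the 1460-byte MSS and the 1% loss figure are illustrative assumptions, not measurements from the question):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bytes/s using the
    Mathis et al. model: rate ~ MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss_rate))

# Illustrative figures only: 1460-byte MSS, 1% genuine end-to-end loss.
for rtt_ms in (20, 300):
    bps = mathis_throughput(1460, rtt_ms / 1000, 0.01) * 8
    print(f"RTT {rtt_ms}ms -> ~{bps / 1e6:.2f} Mbit/s")
```

Note how the same loss rate costs roughly 15x more throughput at 300ms than at 20ms — latency and real loss interact, while the number of hops never enters the calculation.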
Regarding the impact of latency on download speed: many TCP implementations are configured to use a receive window of 64 kbytes. On a high-latency connection between two hosts (more precisely, one with a high bandwidth-delay product), this window size can limit your effective throughput, because TCP stops transmitting once a full window of data is in flight and resumes only as ACKs for already-sent data arrive from the far end.
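This limit is easy to quantify: even with zero loss, throughput can never exceed the window size divided by the RTT, because at most one full window can be in flight per round trip. A quick sketch using the 64-kbyte window mentioned above and the RTT figures from the question:

```python
def window_limited_throughput(window_bytes, rtt_s):
    """Upper bound on TCP throughput in bytes/s: at most one full
    receive window can be in flight per round trip."""
    return window_bytes / rtt_s

window = 64 * 1024  # 64-kbyte receive window
for rtt_ms in (20, 300):
    mbps = window_limited_throughput(window, rtt_ms / 1000) * 8 / 1e6
    print(f"RTT {rtt_ms}ms -> at most ~{mbps:.1f} Mbit/s")
```

So the same 64-kbyte window that allows roughly 26 Mbit/s at 20ms caps you at under 2 Mbit/s at 300ms — which is why window scaling matters so much on long-delay paths.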
EDIT: Depending on how you have PingPlotter configured, it may not be giving you an accurate picture of the loss on your connection. If PingPlotter is using ICMP, networks may drop or deprioritise that traffic in times of congestion, as it is not considered 'user traffic'. Also, any figures for loss at intermediate hops should be considered suspect, for the reasons mentioned above.
If possible, it would be worth running a packet capture on your host (this can be done with Wireshark, for example) and looking at Wireshark's analysis of the actual TCP conversations your applications are carrying out.
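As a rough illustration of what Wireshark's TCP analysis is doing when it flags retransmissions, you can spot them by watching for segments whose byte range has already been sent. A self-contained sketch over a hypothetical list of (seq, payload_len) pairs — in practice the data would come from a real capture, and this is only a crude stand-in for Wireshark's `tcp.analysis.retransmission` heuristic:

```python
def count_retransmissions(segments):
    """Count segments whose entire byte range was already covered by
    an earlier segment in the same flow (a crude approximation of
    Wireshark's retransmission detection)."""
    seen = set()   # byte offsets already observed on the wire
    retrans = 0
    for seq, length in segments:
        span = set(range(seq, seq + length))
        if span and span <= seen:  # every byte was sent before
            retrans += 1
        seen |= span
    return retrans

# Hypothetical flow: the 1448-byte segment at seq 1448 appears twice.
flow = [(0, 1448), (1448, 1448), (1448, 1448), (2896, 1448)]
print(count_retransmissions(flow))  # -> 1
```

Counting retransmissions in your own data flow like this (or simply reading the figure Wireshark reports) gives you the real loss number, rather than the per-hop ICMP artefacts traceroute and PingPlotter show.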