Yes (and no, it is not internet speed, and it is not speed per se).
Speed
Speed is a very imprecise term that conflates two different things which are largely independent but interact with each other: latency and bandwidth.
Also, the speed that you observe is not "internet speed". It is a very complex mixture of many things that happen on your end (your computer), on the other end (the server), and at several points in between -- and it may be a totally different thing with the next server that you access, even if that one is just as far away (or farther).
Bandwidth
Bandwidth is the amount of data you can -- in theory -- push onto the wire per unit of time. There are usually hard and soft limits: the hard limit is what the line is physically able to carry, and the soft limit is what you pay for and what the provider will allow you (usually less!). Often, transfers are not uniform: they start faster and are throttled down very soon.
For example, I have a 96 Mbit/s uplink on a physical line with a capacity of 112 Mbit/s; for enhanced stability, less of the bandwidth is used than would actually be possible. However, I only pay for 50 Mbit/s (which is more than enough for my needs, and 10€ per month cheaper), despite actually getting 96 Mbit/s. Wait... how does that work? Why would anyone pay more money then? Well, I transmit everything at 96 Mbit/s, but after a very short time (less than 0.1 seconds) the provider will covertly block me, and only allow more data to be sent/received once enough time has passed that I am within the quota I paid for. Thus, on average, I have my 50 Mbit/s. Very similar things happen at several locations within the internet that your traffic passes through, too (without you ever knowing). Traffic is being "shaped" according to importance, sometimes with unknown metrics, and (while controversial and disputed, see "net neutrality") according to who owns the cable and what people pay.
Bandwidth on the internet is, for the most part, so huge that -- except during multi-nation-wide DDoS attacks -- it is not a limiting factor in any way. Well, in theory, and in most parts of the world, that is.
There are, however, bottlenecks: one is at your end, the next obvious one is at the server's end, and there is a very real chance that if you interact with a server in a different geographical location, especially in a third-world country, the total bandwidth will be significantly worse than at either of the two ends. Some countries in south-east Asia have international uplinks not much bigger than what a handful of individual home users have in other countries (or even in the same country). I don't know if this is still the case (things change ever so fast in this world), but in Thailand, for example, accessing a server within the same country used to be 4 times faster than accessing a server in another country, for just that reason. The same would hold if you tried to access a server inside their country from abroad.
Even though bandwidth at your location may be high, it is the slowest connection in the chain that limits how much data you can push through (just like in a water pipe). A longer distance generally means more opportunities to encounter a slow (or congested) link.
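The water-pipe analogy boils down to taking a minimum. The link capacities below are made-up numbers purely for illustration:

```python
# Example link capacities along a path, in Mbit/s (made-up numbers).
path = {
    "your uplink": 50,
    "provider backbone": 10_000,
    "international link": 622,
    "server uplink": 100,
}
# The end-to-end rate is capped by the narrowest link in the chain.
bottleneck = min(path.values())  # here: 50 Mbit/s, your own uplink
```

No matter how fat the backbone is, the path is never faster than its slowest hop.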
Latency
Latency is the time it takes a signal to arrive at your location (or any particular location) from some point.
First, there is the speed of light, which is (not) constant and, being a hard physical limit, cannot be worked around. Why am I saying "(not) constant"? Well, because reality is even worse than theory. The speed of light is really an upper bound, measured in vacuum. In a copper cable, or even more so in a fiber optic cable, the measurable speed of light is easily something like 30% slower than in vacuum, plus the actual distance travelled is longer. That's not only because the cable is not laid in a perfectly straight line, but also because the light travels along the fiber in a zig-zag, bouncing off the walls (total internal reflection). It is a tough challenge (read: impossible) to make the speed of light significantly faster. You could try using a different medium, but a medium with a higher speed of light means changing the index of refraction, so you reduce, and eventually lose, total internal reflection -- which means that unless the fiber runs in a perfectly straight line, the signal no longer arrives at the other end at all!
Thus, in summary, there is a more or less fixed delay which is unavoidable, and while not noticeable in local (LAN, or some few kilometers) transmissions, it becomes very noticeable as the signal goes across half a continent. In addition to this hard physical limit, there are delays introduced by intermediate routers, and possibly your local uplink (the infamous "last mile").
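To put a rough number on that fixed delay, here is a back-of-the-envelope calculation. The ~30% slowdown in fiber is from above; the 1.5x cable-detour factor and the 6000 km distance are illustrative assumptions:

```python
C_VACUUM_KM_PER_S = 299_792                   # speed of light in vacuum, km/s
C_FIBER_KM_PER_S = C_VACUUM_KM_PER_S * 0.7    # assume ~30% slower in fiber

def one_way_delay_ms(distance_km, cable_factor=1.5):
    # cable_factor: assumed detour of the physical cable vs. a straight line
    return distance_km * cable_factor / C_FIBER_KM_PER_S * 1000.0

delay = one_way_delay_ms(6000)   # e.g. a rough transatlantic-scale distance
# roughly 43 ms one way, before any router or last-mile delay is added
```

Double that for a round trip, and the "unavoidable" part of the delay is already near 90 ms before a single router has touched the packet.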
For example, on a typical ATM-based home internet connection, you have a delay of about 4 ms just for your datagrams being needlessly encapsulated in PPP and chunked up into 53-byte ATM cells, sent over to the DSLAM, routed within the provider's ATM network, and reassembled before entering an IP network again. The reason why this is done is historic: once upon a time, ATM seemed like a good plan to enable low-latency, high-quality phone calls over long distances. That "once upon a time" was the 1980s, but alas, telecom providers move slowly.
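As a side note, those 53-byte cells also carry a fixed byte overhead that you can sketch with a bit of arithmetic (each cell carries 48 bytes of payload and 5 bytes of header; the PPP overhead on top is ignored here):

```python
CELL_BYTES = 53      # ATM cell size on the wire
PAYLOAD_BYTES = 48   # payload per cell (the other 5 bytes are header)

def cells_needed(datagram_bytes):
    # Ceiling division: a partially filled last cell still costs a full cell.
    return -(-datagram_bytes // PAYLOAD_BYTES)

def wire_bytes(datagram_bytes):
    return cells_needed(datagram_bytes) * CELL_BYTES

# A 1500-byte IP packet needs 32 cells = 1696 bytes on the wire,
# i.e. roughly 13% of extra bytes -- the so-called "cell tax".
overhead = wire_bytes(1500) / 1500 - 1
```

So ATM costs you not only latency but also a slice of your raw bandwidth.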
Even for many installations that have "fiber" in their name, in reality copper wire is used for the last dozen meters; not rarely, the fiber ends in the street (though real fiber to the basement does exist).
A typical internet router will add something in the range of 0.05 to 0.2 milliseconds to your delay, but depending on how busy it is (and maybe it's not top-notch hardware), this may very well be a full millisecond. That's not a lot, but consider that having 6-8 routers between you and the destination server is not at all unusual, and you may very well have 12-15 of them over a longer distance! You can run tracert some.server.name to see for yourself.
A line that has been cut and tapped by the NSA or the SVR (so basically every main line going from/to the Asian continent, or across the Red Sea, the Indian Ocean, or the Atlantic Ocean) will have at least another two milliseconds or so of latency added for the espionage stuff they're doing, possibly more. Some nations are known (or at least highly suspected) not only to observe content and block certain IP ranges, but even to do extensive active filtering/blocking of politically/ideologically inappropriate content. This may introduce much longer delays.
Thus, even for "nearby" locations, you can expect anything from 15 to 25 ms of delay, but for something in another country, you should expect ~100 ms, on another continent 150-250 ms, if you are unlucky 400-500 ms.
Now, despite all, it would seem like this doesn't make that much of a difference because this is only a one-time initial delay, which you hardly notice. Right?
Sadly, that is not entirely true. Most protocols that transmit significant amounts of data, such as TCP, use a form of acknowledgement-driven bandwidth throttling, so the amount of data you can push onto the wire depends on the time it takes to do a full round trip (there and back again). This is not 100% accurate because TCP attempts to optimize throughput with one of several rather complex windowing algorithms that send out a number of datagrams before waiting for an acknowledgement.
While this somewhat mitigates the effect, the basic principle remains: what you can send (or receive) is ultimately bound by the time it takes for acknowledgements to come in. Some other protocols with more stringent realtime requirements and less stringent reliability requirements (think IP telephony) use a different strategy with different issues (which I will not elaborate on).
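The core effect is the bandwidth-delay product: with a fixed window, throughput cannot exceed the window size divided by the round-trip time. The classic 64 KiB TCP window (without window scaling) and the RTT values below are illustrative:

```python
def max_throughput_mbps(window_bytes, rtt_ms):
    # At most one window of data can be in flight per round trip.
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

lan = max_throughput_mbps(65_535, 0.5)   # 64 KiB window, sub-ms LAN RTT
wan = max_throughput_mbps(65_535, 150)   # same window, intercontinental RTT
# lan is over 1000 Mbit/s; wan collapses to under 4 Mbit/s -- same window,
# the only difference is latency.
```

This is why a connection that feels instant locally can crawl across an ocean even when both endpoints have bandwidth to spare.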
You can see what a big impact latency has if you compare a poor TCP implementation (Microsoft Windows) with a better one (Linux). While they both speak the same protocol and seemingly do the exact same thing, they do not cope with latency compensation equally well.
I own a desktop computer (6700K processor, 64GB RAM, Windows) and a Synology DiskStation (low-power ARMv8 chip, 1GB RAM, Linux). The desktop computer, connected to the same router, while being many times more powerful, cannot fully saturate the 50 Mbit/s line when downloading from national or within-EU servers (15-20ms RTT), even with several concurrent downloads in flight. The meek DiskStation has no trouble with completely saturating the line on a single download, getting 15-20% more throughput -- same cable, same everything.
On my local area network (where latency is well below a millisecond) there is no noticeable difference between the two. That's the effect of latency.
Speed... again
In summary, yes, you can expect "speed" to go down as distance increases, mostly because latency increases, and to some extent because you may have lower bandwidth connections in between. For the most part, the effect should however be tolerable.