Last updated on June 13th, 2020 at 08:54 pm
Memcached is a general-purpose distributed memory caching system; memcache usually refers to the client module or extension that talks to the memcached service. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read.
What memcache does is not hard to understand: it caches data, and from then on requests can be served directly from memcache instead of hitting the database or regenerating pages. This is faster than hitting the DB or disk because the cached data stays in RAM, whereas a query always slows things down when the DB holds tons of data and receives a lot of hits in a given time.
As memcache saves the cached content in RAM, the server needs enough RAM to run memcached, otherwise terrible things may happen. However,
The servers keep the values in RAM; if a server runs out of RAM, it discards the oldest values. Therefore, clients must treat Memcached as a transitory cache; they cannot assume that data stored in Memcached is still there when they need it.
In other words, memcached manages its RAM by itself: when memory runs out, it evicts the oldest values instead of failing.
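That eviction behavior can be sketched with a tiny in-process cache. This is only an illustration in Python (memcached itself is written in C and its real eviction policy is more nuanced); the class name and capacity here are made up for the example:

```python
from collections import OrderedDict

class TinyLRUCache:
    """Toy illustration of memcached-style eviction: when the cache is
    full, the least recently used entry is discarded to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)  # refresh position on overwrite
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        if key not in self._data:
            return None  # cache miss: caller must fall back to the DB
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

cache = TinyLRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)      # cache is full, so "a" (the oldest) is evicted
print(cache.get("a"))  # None -> clients must treat the cache as transitory
print(cache.get("c"))  # 3
```

This is exactly why the quote above says clients cannot assume their data is still there: a `set` never fails for lack of space, but it may silently push something else out.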
Please carefully read the following page explaining the hardware requirements.
Why you need it
Slow downloads of web content are not always caused by the script; server performance matters too. There is more than one thing to consider when discussing website performance.
- The content being downloaded is not well optimized (script issue).
- The server serving the data is under-configured.
- More requests arrive at once than the server can handle.
- There is too little RAM to hold cached data.
- Or there is no cache at all!
- The user has a slow internet connection.
- Too much content on a single page (page layout issue).
Memcache can reduce hits to the DB by saving data in RAM the first time it is requested from the DB; subsequent calls for the same data are then served from RAM instead of searching the DB again.
However, also consider the following:
Let's say you have a cache hit rate of 90%. If you have 10 memcached servers and 1 dies, your hit rate may drop to 82% or so. If 10% of your requests were getting through to the backend before, having that jump to 18% or 20% means your backend is suddenly handling twice as many requests as before. Actual impact will vary since databases are still decent at handling repeat queries, and your typical cache miss will often be items that the database would have to look up regardless. Still, twice!
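The arithmetic behind that warning is easy to check. Assuming cached keys are spread evenly across servers (a simplification), losing 1 of 10 servers turns one tenth of the former hits into misses:

```python
hit_rate = 0.90
servers = 10

# One server dies: the keys it held (1/10 of the cached working set)
# now miss, assuming keys are evenly distributed across servers.
new_hit_rate = hit_rate * (servers - 1) / servers
old_misses = 1 - hit_rate
new_misses = 1 - new_hit_rate

print(f"hit rate drops from {hit_rate:.0%} to {new_hit_rate:.0%}")
print(f"backend load multiplier: {new_misses / old_misses:.1f}x")
```

This simplified model gives 81% rather than the quote's "82% or so", but the conclusion is the same: miss traffic roughly doubles, so the backend must absorb about twice the load.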
Will it solve the performance issue?
- It is supposed to. It is the best available option we have right now (if your server administrator decides that the server is capable of running memcache).
- That said, caching is a different kind of solution rather than a drop-in fix, and little else can do what it does. It has its own rules and its own nature, and it fails on its own terms.