
Properties and Limitations of NFS

The typical use case for NFS is central storage for web server data, when synchronization is not handled by the application or by the deployment process. Its purpose is to keep all application data in one place, so that multiple web servers can access it simultaneously and see the same data. Central storage can be provided by a software implementation of the NFS protocol for small projects, or by a business-grade hardware solution (either dedicated or provided as a service) for larger ones.
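In practice, each web server mounts the shared export at the same path. A minimal `/etc/fstab` sketch could look like the following; the server name, export path, and mount point are illustrative, not taken from any particular setup:

```
# /etc/fstab on each web server (hostnames and paths are examples)
storage.example.com:/export/www  /var/www  nfs4  rw,hard  0  0
```

The `hard` option makes clients retry indefinitely on server outages instead of returning I/O errors to the application, which is usually what you want for application data.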

In either case, for a trouble-free experience, you need to follow several rules. Most importantly, this type of storage is not appropriate for write-heavy workloads. A common mistake is using central storage for logs or for an application cache. Logs should either be saved locally, or shipped to a dedicated server via remote syslog or the Elastic Stack. An application cache needs to run on each web server separately; if you require a shared cache, store it in a NoSQL database such as Redis.
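The shared-cache pattern above is usually implemented as cache-aside: check the cache first, and only compute and store the value on a miss. A minimal Python sketch follows; in production `cache` would be a `redis.Redis` client (redis-py exposes `get` and `setex` with this shape), but here a tiny in-memory stub keeps the example self-contained, and the key name and TTL are arbitrary:

```python
def cached_fetch(cache, key, compute, ttl=300):
    """Cache-aside: try the shared cache first, fall back to compute()."""
    value = cache.get(key)
    if value is not None:
        return value
    value = compute()
    cache.setex(key, ttl, value)  # store with a TTL so stale data expires
    return value

class FakeRedis:
    """In-memory stand-in for a redis.Redis client (TTL ignored)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def setex(self, key, ttl, value):
        self._data[key] = value

cache = FakeRedis()
calls = []
def expensive():
    calls.append(1)          # track how often we actually recompute
    return "rendered-fragment"

print(cached_fetch(cache, "page:home", expensive))  # computes the value
print(cached_fetch(cache, "page:home", expensive))  # served from cache
```

Because the cache lives in Redis rather than on the NFS share, cache churn never touches the network filesystem.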

Sometimes we also encounter bottlenecks caused by an enormous number of reads. That is usually the result of a bug in the application, or of incorrect design. One of the leading causes is a huge number of includes in the code. Because every disk operation takes longer on network storage, loading tens of thousands of files on every page load has a huge negative impact on perceived performance. This is entirely avoidable. Naturally, the issue is also present with local storage, but thanks to the lower latency it is usually not noticeable. A useful tool for debugging this kind of problem is Xdebug. However, Xdebug itself causes a significant performance hit, so we advise using it only in a staging environment or during a maintenance window.
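To see which includes dominate a page load, the Xdebug profiler can be enabled and triggered per request. A sketch using Xdebug 3 configuration directives follows; the extension path and output directory are illustrative and depend on your PHP installation:

```
; php.ini — enable the Xdebug profiler (Xdebug 3 syntax; paths are examples)
zend_extension=xdebug.so
xdebug.mode=profile
xdebug.start_with_request=trigger
xdebug.output_dir=/tmp/xdebug
```

With `start_with_request=trigger`, profiling runs only for requests that carry the `XDEBUG_TRIGGER` cookie or parameter, so a staging server stays usable while you capture a handful of profiles for analysis.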