Rails is a delightful framework, and Ruby is a simple and elegant language. But used improperly, they can hurt performance quite a bit. Some kinds of work are simply not a good fit for Ruby and Rails, and you are better off reaching for other tools: databases, for example, have obvious advantages when processing large data sets, and the R language is especially well suited to statistical work.

Memory issues are the number one cause of slowdowns in many Ruby applications. The 80-20 rule of Rails performance optimization is this: 80% of the speedup comes from memory optimization, and the remaining 20% comes from everything else.

1. Why is memory consumption so important?

Because the more memory you allocate, the more work the Ruby GC (Ruby's garbage collector) has to do. Rails already has a large memory footprint: on average, an application takes up nearly 100MB of memory right after it starts. If you don't pay attention to memory, your process can easily grow past 1GB. With that much memory to reclaim, it is no wonder that the program spends most of its execution time in the GC.
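To see how much of your time actually goes to garbage collection, measure it. Here is a minimal sketch using Ruby's built-in GC::Profiler and GC.count; the loop is just a stand-in for your own code.

GC::Profiler.enable

gc_runs_before = GC.count

# Stand-in workload: replace with the code you want to profile.
100_000.times.map { |i| "string #{i}" }

puts "GC runs during block: #{GC.count - gc_runs_before}"
puts "time spent in GC:     #{GC::Profiler.total_time.round(4)} s"

GC::Profiler.disable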

2. How do we make a Rails app run faster?

There are three ways to make your application faster: scaling, caching, and code optimization.

Scaling is easy to achieve these days. Heroku basically does it for you, and HireFire makes the process even more automated. Other hosting environments offer similar solutions. In short, you can scale whenever you want. But keep in mind that scaling is not a silver bullet. If your application takes five minutes to serve a single request, scaling will not help. And with Heroku + HireFire it is easy to overdraw your bank account: I once watched HireFire scale one of my applications up to 36 instances, for which I paid $3100. I immediately reduced the number of instances to 2 and optimized the code instead.

Rails caching is also very easy to implement. Fragment caching in Rails 4 is very good, and the Rails documentation covers caching well. However, like scaling, caching is not the ultimate answer to performance problems. If your code does not perform as expected, you will find yourself spending more and more resources on the cache, until at some point the cache no longer buys you any speed.
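As a point of reference, here is a minimal sketch of low-level caching with Rails.cache, the same store that backs fragment caching; the Book model and the cache key are illustrative names, not from this article.

# Cache the result of an expensive query for 10 minutes.
def popular_books
  Rails.cache.fetch('books/popular', expires_in: 10.minutes) do
    Book.order(created_at: :desc).limit(10).to_a
  end
end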

The only reliable way to make a Rails application faster is code optimization, which in a Rails context mostly means memory optimization. And, of course, if you take my advice and avoid using Rails for things it was not designed for, you will have less code to optimize in the first place.

Some Rails features cost a lot of memory and cause extra garbage collection. Here is the list.

  • Serializers

A serializer is a convenient way to turn a string read from the database into a Ruby data type, but that convenience can cost you roughly twice the memory. Several people, myself included, have also seen Rails' JSON serializer leak memory, about 10% of the data volume per request. I don't understand the reason behind it, and I don't know whether it is reliably reproducible. If you have experience with it, or know how to reduce the memory usage, please let me know.
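For illustration, this is roughly what such a serializer looks like in ActiveRecord; the User model and preferences column are assumptions, and the exact serialize signature varies between Rails versions.

class User < ActiveRecord::Base
  # The preferences text column is stored as JSON and exposed as a Hash.
  # Every read parses the string into Ruby objects, which is where the
  # extra memory goes.
  serialize :preferences, JSON
end

User.first.preferences['theme']   # a parsed Hash, not the raw string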

  • Active Record

ActiveRecord makes it easy to manipulate data, but it essentially wraps every row of your data in objects. If you have 1GB of table data, loading it through ActiveRecord will cost 2GB, and in some cases more. Yes, in 90% of cases that extra convenience is worth it. But sometimes you don't need it. For example, batch updates can avoid most of the ActiveRecord overhead. The code below does not instantiate any models and does not run validations or callbacks:

Book.where('title LIKE ?', '%Rails%').update_all(author: 'David')

  • Callbacks

Rails callbacks such as before/after save and before/after action are used a lot, but the way you write them can affect performance. There are three ways to write, for example, a before-save callback:

before_save :update_status

before_save do |model|
  model.update_status
end

before_save 'self.update_status'

The first two work well; the third does not. Why? Because executing such a callback requires Rails to store the execution context (variables, constants, global instances, and so on) present at the point where the callback is defined. If your application is large, you end up keeping a lot of data alive in memory, and because the callback can be executed at any time, that memory cannot be reclaimed until your program finishes.

For me, switching to symbol callbacks saved 0.6 seconds per request.

  • Write less Ruby

This is my favorite step. My university computer science professor liked to say that the best code is the code that doesn't exist. Sometimes the task at hand is better done with another tool, most commonly the database. Why? Because Ruby is bad at handling large data sets. Really bad. Remember, Ruby has a large memory footprint, so you might need 3GB or more of memory to process 1GB of data, and garbage collecting those 3GB takes tens of seconds. A good database can process the same data in a second. Let me give some examples.
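A tiny illustration of the difference, assuming a hypothetical orders table with an amount column: the first line builds a Ruby object for every value before summing, while the second lets the database do the arithmetic and return a single number.

# Ruby does the work: every amount becomes an object on Ruby's heap.
total = Order.pluck(:amount).sum

# The database does the work: one SQL SUM, one value comes back.
total = Order.sum(:amount)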

  • Attribute preloading

Sometimes the attributes of a denormalized model have to be fetched from another table. Imagine, for example, that we are building a TODO list made of tasks, where each task can have one or several tags. Instantiating a tag object for every tag on every task creates lots of objects and costs a lot of memory. The alternative is to preload the tags in the database, so that each task only needs one extra column holding an array of tag names. No wonder that turns out to be about 3 times faster.
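Here is a minimal sketch of that idea, assuming PostgreSQL and hypothetical Task, Tag, and taggings tables: the database collapses the tag names into one array per task, so no Tag objects are ever instantiated.

# One row per task, with its tags aggregated into a single array column.
rows = Task
  .joins(taggings: :tag)
  .group('tasks.id')
  .pluck('tasks.id', 'tasks.name', Arel.sql('array_agg(tags.name)'))

rows.each do |id, name, tag_names|
  puts "#{name}: #{tag_names.join(', ')}"
end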

  • Data aggregation

By data aggregation I mean any operation that summarizes or analyzes a data set. Such operations can be simple summaries or something more complex. Take group ranking as an example. Suppose we have a data set of employees, departments, and salaries, and we want to compute each employee's salary rank within their department. We could calculate the rank in Ruby, but it is far cheaper to let the database do it.
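A minimal sketch of pushing that ranking into the database, assuming a hypothetical employees table with name, department, and salary columns and a database that supports window functions (e.g. PostgreSQL); the window function does the heavy lifting and Ruby only receives the finished rows.

# The database computes each employee's salary rank within their department.
rows = ActiveRecord::Base.connection.select_all(<<~SQL)
  SELECT name,
         department,
         salary,
         rank() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
  FROM employees
SQL

rows.each do |row|
  puts "#{row['department']}: #{row['name']} ranks #{row['dept_rank']}"
end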

Optimizing Unicorn

If you are using Unicorn, the following optimization tips apply. Unicorn is one of the fastest web servers you can run Rails on, but you can still make it faster.

Preloading the App

Unicorn can preload the Rails app before forking new worker processes. This has two advantages. First, the master process can share memory with its workers thanks to the copy-on-write friendly GC (Ruby 2.0 and above); the operating system transparently copies the data only if a worker modifies it. Second, preloading reduces worker startup time. Rails worker restarts are very common (more on that later), so the faster a worker restarts, the better the performance we get.

To enable preloading of the app, just add a line to the unicorn configuration file:

preload_app true
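In practice, a preloading Unicorn configuration also re-establishes database connections after forking, because connections opened in the master must not be shared with workers. A minimal sketch (config/unicorn.rb is an assumed path, 4 workers an arbitrary number):

# config/unicorn.rb
worker_processes 4
preload_app true

before_fork do |_server, _worker|
  # Connections opened while preloading belong to the master; close them.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |_server, _worker|
  # Each worker opens its own database connection.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end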

Running GC Between Requests

Keep in mind that GC can account for up to 50% of your application's time. And that is not the only problem: GC is usually unpredictable and triggers exactly when you don't want it to. So what can you do?

The first thought is: what happens if we disable the GC completely? That turns out to be a very bad idea. Your application could easily grow to 1GB of memory before you even notice, and if your server runs several workers at once it will run out of memory very quickly, even on a self-hosted box, never mind Heroku with its 512MB limit.

A better idea is to run the GC between requests, while no user is waiting. The user then clearly feels the performance improvement, but the server has to do more work: unlike GC on demand, this technique makes the server run the GC frequently. So make sure your server has enough resources to run the GC, and enough workers to handle user requests while other workers are busy collecting garbage.
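Unicorn ships an out-of-band GC Rack middleware for exactly this purpose. Below is a minimal sketch of wiring it into config.ru; the interval of 5 requests is an arbitrary choice, and the exact arguments differ across unicorn versions, so check the documentation of the version you run.

# config.ru
require 'unicorn/oob_gc'

# Run a full GC after roughly every 5th request, once the response has
# been written to the client, so users never wait on the collector.
use Unicorn::OobGC, 5

require ::File.expand_path('config/environment', __dir__)
run Rails.application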

Limiting Memory Growth

I have already shown examples of applications growing to 1GB of memory. If you have memory to spare, holding on to such a big chunk may not seem like a problem. But Ruby may never return that memory to the operating system. Let me explain why.

Ruby allocates memory from two heaps. All Ruby objects live in Ruby's own heap, where each object occupies 40 bytes (on a 64-bit operating system). When an object needs more memory than that, it allocates the extra from the operating system's heap. When the object is garbage collected, the memory it held in the operating system's heap is returned to the operating system, but the slots in Ruby's own heap are simply marked as free and are never given back.

This means Ruby's heap only ever grows. Imagine reading 1 million rows of 10 columns each from the database: you need to allocate at least 10 million objects to hold that data. A Ruby worker typically takes about 100MB of memory after booting; to accommodate this data it needs an extra 400MB (10 million objects at 40 bytes each). Even after those objects are reclaimed, the worker still holds on to 500MB of memory.
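One practical way to keep the heap from ballooning like that is to process records in batches, so that only a small slice of the data is alive at any time. A minimal sketch using ActiveRecord's built-in find_each (the Reading model and the per-record work are assumptions):

# Load records 1,000 at a time instead of materializing a million
# ActiveRecord objects at once, keeping Ruby's heap small.
Reading.find_each(batch_size: 1_000) do |reading|
  process(reading)   # hypothetical per-record work
end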

Conclusion

"Rails is slow" has almost become a meme in the Ruby and Rails community. However, as a Ruby on Rails development company we know this is simply not true: as long as Rails is used correctly, it is not difficult to make your application 10 times faster.


About the Author

QuickBeyond
Quick Beyond is a web and mobile application development company offering a wide range of IT services and solutions revolving around Ruby on Rails application development, full-stack development, top-notch JavaScript development, and on-demand solutions. We are renowned for offering bespoke web and mobile application development services to clients ranging from SMEs to large-scale enterprises.