Kofax

Overview of how robot execution affects CPU utilization

Summary


The number of robots you can run concurrently on a CPU depends on the processing power of the unit and the response time of the sites you are running against. A thread in a program is either in a running state (using 100% of the processing power of a core) or a waiting state (using 0% of CPU). The average CPU usage of a robot is therefore determined by its ratio of waiting time to running time (this is true for all software). In short, the faster you can get the data from the remote server, the more CPU is used: the robot is not waiting for data to process and is running nearly all the time during its execution.

As an example, if one robot takes a total of 10 seconds to run and spends 4 of those 10 seconds waiting for data from the remote website, it is running for the other 6 seconds and therefore uses 60% CPU on average. That is just one robot. In our test lab, we have a robot that collects flight information from the EasyJet website, which is extremely responsive. Because the site responds so quickly, a single instance of the robot consistently uses all available CPU resources, since it almost never has to wait for data. Running just 3-4 concurrent instances of this same robot maxes out a test RoboServer on a single core of a 2.6 GHz Centrino. CPU utilization is very high because each instance of the robot (i.e. each thread) is always running rather than waiting for data.
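The wait-to-run ratio described above can be expressed as a one-line calculation. This is a generic sketch of the arithmetic, not part of any Kapow API:

```python
def average_cpu_utilization(total_seconds, waiting_seconds):
    """Fraction of one core used by a single robot thread.

    The thread burns CPU only while running, so average utilization
    is running time divided by total execution time.
    """
    running_seconds = total_seconds - waiting_seconds
    return running_seconds / total_seconds

# The example from the text: 10 s total, 4 s waiting -> 60% CPU.
print(average_cpu_utilization(10, 4))  # 0.6
```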

On modern websites, much of a robot's execution time is actually spent executing JavaScript. It is possible to disable JavaScript execution for a robot, but many sites now rely on JavaScript to parse XML and render content. Unfortunately, executing that JavaScript is also where most of the CPU power is spent.

A couple of other notes on performance. If you are running on a multicore CPU, we recommend starting one instance of RoboServer per core. Tests have shown that 4 instances of RoboServer running on Windows Server 2003 (dual Opteron 280, 8 GB RAM) perform 170% better than a single instance. This is largely due to single-threaded garbage collection and Windows' limited ability to distribute the threads of a single process across multiple cores.

On Linux, the 4 instances performed 70% better. Linux was also 30% faster than Windows on the hardware in question.
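To make the scaling figures concrete, here is a back-of-the-envelope calculation using the percentages quoted above (the base rate is a hypothetical number chosen purely for illustration):

```python
def aggregate_throughput(base_rate, improvement_pct):
    """Throughput of the multi-instance setup, given the quoted
    percentage improvement over a single instance."""
    return base_rate * (1 + improvement_pct / 100)

base = 100.0  # hypothetical requests/minute for one RoboServer instance
print(aggregate_throughput(base, 170))  # Windows, 4 instances: 270.0 (2.7x)
print(aggregate_throughput(base, 70))   # Linux, 4 instances: 170.0 (1.7x)
```

Note that "170% better" means 2.7 times the single-instance throughput, not 1.7 times.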

You should not use the DefaultRobotLibrary unless you run collection robots with running times of 10+ minutes. When you use the default library, the server does not cache the robot file, so it reads and parses it from disk on every request. For large robots this adds 1-4 seconds of load time (at 100% CPU), and even more if the disk system is under load from a high volume of concurrent requests.

If you use the EmbeddedFileBasedRobotLibrary or URLFileBasedRobotLibrary, the server can cache the robot, and load times drop to around 0.1 ms, saving a lot of time in systems with a high request frequency.

With regard to hardware, robots tend to be CPU intensive (except for some clipping setups). Kapow recommends Intel or AMD CPUs, as these have higher clock frequencies, which yield lower response times. Budget at least 1 GB of memory per core, plus 1 GB extra for the OS.
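The memory rule of thumb above translates into a simple sizing formula (the function name and defaults are illustrative, not from any Kapow tool):

```python
def recommended_memory_gb(cores, per_core_gb=1, os_overhead_gb=1):
    """Memory budget from the rule of thumb above:
    at least 1 GB per core, plus 1 GB extra for the OS."""
    return cores * per_core_gb + os_overhead_gb

print(recommended_memory_gb(4))  # 5 GB for a quad-core machine
print(recommended_memory_gb(8))  # 9 GB for an eight-core machine
```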

Keywords: Kapow, CPU, Robotic Process Automation