To demonstrate scalability at full load, we compared the 32-bit and 64-bit versions of 4D Server 12.1 with 510 connected clients. Each client executed queries, record creations, and deletions in two separate processes, and statistics were collected during these operations.
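The shape of that benchmark, each client driving two concurrent processes through a query/create/delete mix, can be sketched as a small load harness. This is an illustrative stand-in, not 4D's test code: the names (`run_client`, `OPS_PER_PROCESS`) and the operation tally are assumptions, and a real run would issue calls to the server instead of counting.

```python
import threading

# Hypothetical load harness: each "client" runs two worker processes,
# each performing a mix of query/create/delete operations, as in the
# benchmark described above. Only the bookkeeping is real here.

OPS_PER_PROCESS = 100
stats = {"query": 0, "create": 0, "delete": 0}
lock = threading.Lock()

def worker():
    for i in range(OPS_PER_PROCESS):
        op = ("query", "create", "delete")[i % 3]
        # A real harness would call the server here; we only tally the mix.
        with lock:
            stats[op] += 1

def run_client():
    # Two processes per client, as in the benchmark.
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# 10 simulated clients here; the real test connected 510.
clients = [threading.Thread(target=run_client) for _ in range(10)]
for c in clients:
    c.start()
for c in clients:
    c.join()

print(sum(stats.values()))  # -> 2000 (10 clients x 2 processes x 100 ops)
```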
Usually, scalability is not about speed. Being able to allocate more memory does not mean that an application will run faster, but that it will support a heavier load. However, there are situations where an operation runs faster simply because 4D Server can use more RAM. A typical example is the cache size. The 32-bit version can allocate a maximum of 2.3 GB of cache. (The remainder of the address space is used by the engine: handling connections, processes, users, code, etc.) There is no such limit in the 64-bit version, which means it is now possible to hold all the data (and indexes) in the cache.
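The 2.3 GB ceiling follows from the address space itself: a 32-bit process can address at most 4 GiB of virtual memory, and the engine's own needs come out of that same budget. The arithmetic below reconstructs the quoted figure; the engine-overhead value is an assumption chosen only to match the number in the text, not a documented 4D constant.

```python
# Why ~2.3 GB? A 32-bit process can address at most 4 GiB of virtual
# memory, and part of that budget goes to the engine itself (connections,
# processes, users, code, ...). The overhead figure is assumed, picked
# only to reproduce the ceiling quoted in the article.

GIB = 1024 ** 3
address_space_32bit = 4 * GIB    # hard 32-bit addressing limit
engine_overhead = 1.7 * GIB      # assumed: everything outside the cache
max_cache_32bit = address_space_32bit - engine_overhead

print(round(max_cache_32bit / GIB, 1))  # -> 2.3
```

A 64-bit process has no comparable ceiling in practice, which is why the cache can grow to cover the whole data file and its indexes.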
To demonstrate scalability's effect on performance, we compared the time it takes to perform a sequential sort using the 32- and then the 64-bit versions of 4D Server v12. A sort needs memory to store the temporary data used in comparisons. 4D Server does not use the cache for this kind of operation; it allocates the memory in the “engine memory” (any part of the virtual memory 4D Server can allocate outside the cache). We set the size of the cache to the same value in both environments.
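The temporary memory a sort consumes is easy to see in a classic merge sort, which allocates an auxiliary buffer roughly the size of the input while merging. This sketch only illustrates that memory pattern; it is not 4D's sort implementation, and in 4D Server that buffer would live in the engine memory described above, outside the cache.

```python
# A sequential sort needs working memory beyond the data itself:
# merge sort, for example, builds an auxiliary "merged" buffer during
# each merge step. That buffer is the analogue of the temporary data
# the article says 4D Server allocates in engine memory.

def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # The merged buffer coexists with the two halves until the
    # merge completes, so peak memory exceeds the input size.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 8, 1, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]
```

With more engine memory available, the 64-bit server can keep all of this working data in RAM even for very large sorts, which is where the speed difference in the comparison comes from.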
Until now, it has been up to each developer to create the mechanisms and workflow for accurate data synchronization and replication, leading to a variety of implementations and, of course, a lot of extra development.
The following sync functions are now integrated: