Joel Yourstone

Performance, it’s a mindset

Apr 1, 2018 10:40:00 AM


I’ve read a lot of blogs recently about performance for (mostly) commerce solutions. This time of year it is crucial, as it serves as the yearly organic load test for almost everyone selling things online. I’m of course talking about the combination of Black Friday (weekend/week), Cyber Monday and the Christmas sales. This period of roughly one month can completely decimate an ecommerce vendor that isn’t performant enough. I’m not only talking about being completely down; conversion rate is also proportional to how fast your website is. In other words, people are too impatient to wait another second and will go to a competitor in another tab instead.

So who determines if a site is slow or fast? Is it the New Relic APM threshold thingy? Google PageSpeed Insights? No. It’s the customer.

Some might think that makes it harder, but I believe it’s easier to work with. It is subjective, but also quite objective: UX designers and performance geeks largely agree on what ultimately makes a nice experience on a web page. Pure performance metrics absolutely have their part here, but the user’s perception of performance matters even more.

There are a lot of great people talking about performance, especially around this time of year, targeting the backend part of the application, which to be fair is the part you need to have under control to avoid downtime. A noteworthy mention is Quan Mai’s blog; he writes a lot about performance, especially in the data layer of a site (i.e. SQL Server). But I see far too few posts about performance on an architectural level. Can we challenge the architecture we have to make something more performant? Or can we even just make something that seems more performant?

Fake it til you make it

On several Epi Commerce implementations we’ve made at Avensia, I’d like to say that we’ve had this phrase in mind a lot when it comes to performance. Of course we take the steps needed to have a generally performant system, but I think we’ve also done quite a lot of work on perceived performance.

An example that we’ve implemented successfully:

Say we’re on a category page looking at a list of products. Once we click a product, the user expects to load and see the product page. And that’s what we’ll do, but we can be smart about the data management. We already have part of the data, because we’re displaying it on the category page: in the small product card we have the price(s), an image and the title of the product. That’s essentially 80% of the mobile viewport at the top of the product page. So why not just show that data instantly?

When a user clicks on the product card, we take the data we have and instantly render the product page with it. For the user, the page load is instantaneous. We know that isn’t the whole truth, since we haven’t loaded everything yet, but chances are we’ll have enough time to download and render the full product page in the background while the user reacts to the new screen of info, so when they start to scroll down, it’s all there!
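
To make the idea concrete, here is a minimal TypeScript sketch of that flow. It is not our actual implementation; ProductCard, fetchFullProduct and renderProductPage are hypothetical names standing in for whatever your own frontend uses.

// A minimal sketch of the "render instantly from the card data" idea.
// renderProductPage and fetchFullProduct are hypothetical app functions.

interface ProductCard {
  code: string;
  title: string;
  price: number;
  imageUrl: string;
}

interface FullProduct extends ProductCard {
  description: string;
  variants: string[];
  stockStatus: string;
}

declare function renderProductPage(product: FullProduct): void;
declare function fetchFullProduct(code: string): Promise<FullProduct>;

async function openProductPage(card: ProductCard): Promise<void> {
  // 1. Instant: paint the top of the product page from the card data we already have.
  renderProductPage({ ...card, description: '', variants: [], stockStatus: '' });

  // 2. Background: fetch the rest of the product data while the user reacts.
  const full = await fetchFullProduct(card.code);

  // 3. Fill in the remaining sections before the user scrolls down to them.
  renderProductPage(full);
}

The only assumption is that the summary data from the category page is still valid for the second or so it takes to fetch the rest, which is usually a safe bet.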

The actual page load isn’t ultra fast, maybe 100-200 ms. But we’re buying time and giving ourselves a chance of “fooling” the user into believing it was 0-4 ms (the time to render the new HTML). This is a perfect example of a “fake it til you make it” approach that really works. I can say that this performance is better than what we had before, even though the numbers are exactly the same. Because again, performance is partly about perception!

There are tons of other things you can do with data management and on an architectural level to work with this brand of performance. That said, we can’t just bypass the other performance work. We can’t fake it all the way. There’s a reason we get a 100-200 ms product page load at 20 simultaneous users as well as at 3,000 simultaneous users. And that’s not perceived!

Truth it til you make it

Since I’m writing a blog post about Epi Commerce performance, I have to touch on performance that isn’t just perception as well. Again, I won’t go into much detail, since there are tons of other posts that already explain a lot, but here are some of my personal favourites.

  • Maintain your indexes
    Or make sure someone else does it for you. You don’t need application-level insight to maintain an index, but you do need application-level insight to create one. So looking out for indexes that should be created is a job for the developer!
  • Don’t load things more than necessary
    This is super important, yet I can still say that more than half (if not way more) of Epi Commerce sites don’t follow through on this point. One clear example is the cart: when do you need to reload the cart and recalculate promotions? If you can properly answer this question and the answer maps 1:1 to your implementation, you’ve done a great job!
  • Delegate heavy work the user doesn’t need direct feedback on
    An example: the user adds something to the cart. Is there anything the user wants in return more than seeing the item added to the cart? I can only think of one thing: an updated subtotal for a cart that has run promotions with the new item in it. Everything else, such as actually showing the item in the cart in the frontend, should probably happen instantly, with only a small spinner on the cart total. The user doesn’t need to wait for the whole AddToCart response; that thinking can be done in the background (see the sketch after this list).
  • Make sure your hardware is dimensioned correctly in the right places
    Hardware is a bit tricky with Epi Commerce in my mind. In theory, the system has only one bottleneck, which is the database. So make sure you have the processors/disks/memory you need there. But also make sure that all scalable parts of your system are easy to scale and aren’t under-dimensioned, forcing you to scale more than necessary.
    Another thing to mention is that the cache is completely memory based, so if you’re not using a separate cache server like Redis, you’ll want quite a lot of memory on each web front. You don’t want a cache that’s evicted too early due to memory limitations!
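
To illustrate the add-to-cart delegation from the list above, here is a rough TypeScript sketch. Again, this is not a real implementation; renderCartLines, showTotalsSpinner and postAddToCart are hypothetical helpers standing in for your own UI and API code.

// A sketch of "delegate the heavy part" for add-to-cart: the UI updates
// instantly, and only the promotion-adjusted subtotal waits for the server.

interface CartLine {
  code: string;
  quantity: number;
  unitPrice: number;
}

interface CartTotals {
  subtotal: number;
  discount: number;
}

declare function renderCartLines(lines: CartLine[]): void;
declare function renderCartTotals(totals: CartTotals): void;
declare function showTotalsSpinner(): void;
declare function postAddToCart(line: CartLine): Promise<CartTotals>;

let cartLines: CartLine[] = [];

async function addToCart(line: CartLine): Promise<void> {
  // 1. Instant: show the item in the cart right away, without waiting for the server.
  cartLines = [...cartLines, line];
  renderCartLines(cartLines);

  // 2. Only the subtotal needs the server's promotion calculation, so show a
  //    small spinner there while it runs in the background.
  showTotalsSpinner();
  const totals = await postAddToCart(line);

  // 3. Replace the spinner with the recalculated subtotal.
  renderCartTotals(totals);
}

The point is simply that the only thing the user ever waits for is the promotion-adjusted subtotal, and even that waits behind a spinner rather than blocking the whole interaction.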

Finding the things that are expensive to fetch or manage can be tricky, but smart use of performance tools will help here. I could talk a lot about specific findings we’ve made with SQL Profiler and New Relic APM thread profiling, but to keep to this post’s topic I’ll save that for a separate blog post.

These points have all come from experience. Some bad experiences, some good, but you live and you learn!

Combining the two approaches…

… makes you approach performance with a different mindset. Performance is architecture, and architecture is performance. Even if you are a frontend developer, it’s good to have the performance mindset: think about data management and have solutions for where to store and manage your data.

Performance really is a mindset that you need to have as a developer and continually challenge and improve. I’m far from being an expert in these fields, but I try to embrace this mindset as much as I can, to create performant, cutting-edge solutions!