Magento Open Source v1, covering Magento versions up to 1.9.x, is still the eCommerce platform for many small businesses, and whilst we all know we need to move to Magento 2, the time and resources required to migrate a successful v1 shop to v2 are for many people prohibitive.
PHP7
The original system requirements for Magento 1 were based on PHP5, so Magento 1 users were forced to stay with an operating system that distributed PHP5 – a PHP upgrade would break Magento. There were unofficial patches available that would make Magento compatible with later versions of PHP, but I was always wary of applying unofficial core code changes to my live shops.
My live Magento shops were built on Ubuntu Trusty 14.04 LTS, which included PHP5, and Trusty was also the base OS for my Docker images. Now, with the 1.9.4.0 update, we can move to current versions of PHP and enjoy the performance benefits of PHP 7.x.
PHUSION
As a big fan of Ubuntu I usually build my Ubuntu Docker containers using the Phusion Ubuntu base image. For my Magento 2 containers I compile PHP myself, so for Magento 1.9.4.0 it was pretty easy to adapt a Magento 2 Dockerfile for Magento 1 on PHP 7.2.
Upgrade
Upgrading my Docker Magento 1 PHP5 shops to PHP7 was simply a case of applying the 1.9.4.0 upgrade, replacing my old Ubuntu Trusty base image with the Ubuntu Bionic PHP 7.2 image and restarting.
I am still testing all my extensions and associated PHP5 Magento application code for functionality but so far the upgrade looks good.
You can find my Magento 1.9.4.0 PHP7.2 image here. My Dev shop running Magento 1.9.4.0 with PHP7.2 in Docker is here.
If you are a Magento 1 Ecommerce Store owner you are probably well aware that support for Magento CE 1.x will stop in 2020.
The nice folks at Magento call it “Sunsetting”.
We have all known this for some time. Exactly what it means for Magento 1 stores is open for debate – I doubt the world is going to end, but are the risks associated with running deprecated ecommerce store software acceptable?
Magento 2 has been maturing nicely over the last few years and now is definitely as good a time as any to start working on your migration plan.
The costs and time associated with migrating a Magento 1 store with “minimal” third party integration are estimated at between $5,000 and $60,000 and 3 to 6 months of work. Even a very basic migration will require a new custom theme, modules and data export.
The total cost of migration depends of course on the in-house skills you or your company possess. Chances are if you are reading this you already have Magento 1 and PHP experience. That’s good news! If you are new to Magento 2 the bad news is that it’s a steep learning curve to get up to the same speed with Magento 2 that you have right now with trusty old Magento 1.
Time invested now in setting up a Magento 2 development environment is time well spent. I spent the last couple of years dabbling with Magento 2 when I had the time – most of my time was spent getting my Magento 2 Docker environment right and testing migration methods.
Migrating a mature Magento 1 store to Magento 2 is a daunting task. If your store has been around for a long time (mine started life as Magento 1.3 in 2013) no doubt you will have a very customised theme and will have installed or developed a lot of custom modules and features over the years. You may have thousands of products, tens of thousands of customers and hopefully a lot of order data.
So where do you start? In a perfect world you would start with the official Magento 2 migration plan and within half an hour and a few clicks your Magento 1 store is automagically migrated to Magento 2. This fairy tale migration only really works on vanilla Magento 1 installs with no customisation. Having said that as of the latest Magento 2 (2.3.x) the migration process has matured and improved. It is definitely worth familiarising yourself with the migration tool as you probably will need it to migrate some of your Mage 1 data.
For me the migration tool worked perfectly for customer data, but not for products. And of course unless you want to use the basic Magento 2 theme, theme migration is not included.
Once you accept the fact that you are going to have to pretty much rebuild your catalog from scratch it’s time to step back and have a think about how to rebuild thousands of products.
Having worked with Magento 1 for almost 10 years I was used to exporting product data for use in other applications such as Amazon and eBay product feeds, so I thought why not simply export the Magento 1 data in a format that Magento 2 can import.
This is not a bad idea, but we are talking about a lot of data – thousands of products, multi store views, lots of custom attributes. That’s a pretty big CSV file to manage.
A much better idea is to create a “skeleton” export of your Magento 1 product data – just the bare bones of the products, enough for Magento 2 to import them with enough data to create the product.
Once you have the basic product structure imported simply synchronise the data between MAGE1 and MAGE2! Sounds complex but actually it is pretty easy.
We just need two PHP scripts, one with access to the MAGE1 source files and database and one with access to the MAGE2 installation. The MAGE2 script calls the MAGE1 script to request product data for a SKU. The MAGE1 script extracts all the product data you need into an array returning it in JSON format to MAGE2 which in turn updates the corresponding MAGE2 skeleton product. This is done for each store view.
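As a rough illustration (not the actual scripts), here is a minimal sketch of the MAGE2 side of that sync. The MAGE1 export url mage1.example.com/export.php and the fields being copied are placeholders – the real sync also handles store views, images, tier prices and configurable data.

<?php
// Hypothetical MAGE2 sync script, run from the Magento 2 root:
// fetch full product data for one SKU from the MAGE1 export script (JSON)
// and update the matching skeleton product in Magento 2.
use Magento\Framework\App\Bootstrap;

require __DIR__ . '/app/bootstrap.php';

$bootstrap     = Bootstrap::create(BP, $_SERVER);
$objectManager = $bootstrap->getObjectManager();
$objectManager->get(\Magento\Framework\App\State::class)->setAreaCode('adminhtml');

$sku  = $argv[1] ?? 'EXAMPLE-SKU';
$json = file_get_contents('https://mage1.example.com/export.php?sku=' . urlencode($sku));
$data = json_decode($json, true);

/** @var \Magento\Catalog\Api\ProductRepositoryInterface $productRepository */
$productRepository = $objectManager->get(\Magento\Catalog\Api\ProductRepositoryInterface::class);

$product = $productRepository->get($sku);
$product->setName($data['name']);
$product->setPrice($data['price']);
$product->setData('description', $data['description']);
$productRepository->save($product);

echo "Synchronised {$sku}\n";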
There are some complications associated with configurable products, attribute sets and product attributes which you need to create and import, but once the basic product is in MAGE2 the synchronisation process works perfectly.
At the time of writing I can synchronise all data for all product types including pricing, tier pricing, custom attributes and all product images.
I can keep Magento2 in sync with changes or updates to the live Magento1 shop simply by restoring the latest database backup to my Magento1 migration database and performing a new product sync.
I started my Migration about 6 weeks ago (May 2019) and having developed the import and sync process I am just about ready to start on a “live” dev build of my new Magento 2 store. I don’t have a new theme yet but there is a lot of work to do getting the catalog ready and customising core templates and code.
I will be updating this blog with my progress and hopefully developing the MAGE1 to MAGE2 export and sync scripts to be made available to download.
Stay tuned to watch this dummy migrate a Magento 1 store to Magento 2!
Whilst migrating from Magento 1 to Magento 2 you might want to quickly recreate product attributes in Magento 2 programmatically. Adding a lot of product attributes manually can be very time consuming, so using the setup script in one of your modules lets you quickly create new attributes.
An example of the install method is shown below with code for a configurable product dropdown attribute and a text attribute.
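Here is a hedged reconstruction of that kind of InstallData setup script – the Vendor\Attributes namespace, the attribute codes shirt_size and fabric_type, and the group name are placeholders.

<?php
namespace Vendor\Attributes\Setup;

use Magento\Catalog\Model\Product;
use Magento\Eav\Model\Entity\Attribute\ScopedAttributeInterface;
use Magento\Eav\Setup\EavSetupFactory;
use Magento\Framework\Setup\InstallDataInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\ModuleDataSetupInterface;

class InstallData implements InstallDataInterface
{
    private $eavSetupFactory;

    public function __construct(EavSetupFactory $eavSetupFactory)
    {
        $this->eavSetupFactory = $eavSetupFactory;
    }

    public function install(ModuleDataSetupInterface $setup, ModuleContextInterface $context)
    {
        $eavSetup = $this->eavSetupFactory->create(['setup' => $setup]);

        // Dropdown attribute - global scope so it can be used for configurable products
        $eavSetup->addAttribute(Product::ENTITY, 'shirt_size', [
            'type' => 'int',
            'label' => 'Shirt Size',
            'input' => 'select',
            'group' => 'My Custom Attributes', // puts the attribute in its own group in admin
            'global' => ScopedAttributeInterface::SCOPE_GLOBAL,
            'user_defined' => true,
            'required' => false,
            'searchable' => true,
            'filterable' => true,
            'visible_on_front' => true,
            'used_in_product_listing' => true,
            'option' => ['values' => ['Small', 'Medium', 'Large']],
        ]);

        // Simple text attribute
        $eavSetup->addAttribute(Product::ENTITY, 'fabric_type', [
            'type' => 'varchar',
            'label' => 'Fabric Type',
            'input' => 'text',
            'group' => 'My Custom Attributes',
            'global' => ScopedAttributeInterface::SCOPE_STORE,
            'user_defined' => true,
            'required' => false,
        ]);
    }
}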
To run the install script use php bin/magento setup:upgrade
To force the install script to run again you can delete the module entry in the setup_module database table and run setup:upgrade again. I guess a cleverer way would be to use the upgrade feature, but this is really just intended as a one-off process to install, say, 50 product attributes that would otherwise be time consuming to configure manually.
Note that the group name lets you add these custom attributes into their own group, which makes them more manageable in the admin interface.
A few years ago the nice people at Amazon started to offer unlimited cloud storage for Amazon Prime customers on their so called Amazon Drive service. A lot of people, including me, jumped on board – I mean it was unlimited storage – with emphasis on the UNLIMITED.
This was great – for about a year, lots of people were boasting about the many, many, many Terabytes of data they had stored on Amazon Drive. For me it was a perfect solution to my cloud backups. I subscribed to ODrive and synced all my working data to Amazon Drive.
Then one day Amazon announced that Amazon Drive for Prime customers would no longer have unlimited storage, and would no longer be free. In fact 1TB of storage would cost $99.99 per year. They had dangled a nice free juicy worm in front of me, I had bitten and now I was well and truly hooked – hook line and sinker.
What choice did I have? All my data was already stored on Amazon Drive so I swore a lot, got out my credit card and paid up.
Another year or so later (2019) it’s time to pay Amazon another 100 bucks, and I notice that Amazon Drive is now called Amazon Photos – without really informing their customers, Amazon have rebranded it into a photo and video storage service. Not exactly what I expected from a cloud storage solution.
As soon as I discovered Google Drive offering twice the amount of storage (2TB) for the same price I decided it was time to migrate my ODrive from Amazon Drive to Google Drive. The problem is, with almost 1TB of data in the cloud – even with reasonably fast home DSL internet bandwidth – how do you move so much data around? To download it all and then upload it all would take days, and I don’t even have enough disk space on my shiny silver MacBook Pro to temporarily store it all (the reason I moved to cloud storage in the first place).
Here is the solution – How To Migrate your Terabytes in Hours not Days
Spin up a bare metal Windows server
I used a Win2k16 Server on a Packet c1.small.x86 with super duper gigabit bandwidth.
Install the Amazon Photos app.
Using the Amazon app restore your files to a local folder.
Login to Google and upload the restored files to Google Drive.
In Odrive unsync Amazon Drive synced folders and resync them to Google Drive
FIN.
I moved all of my data in a few hours, and it cost about $30.
Goodbye Amazon, please don’t try and trap me into a shitty service again.
I wasn’t able to use the Magento 2 migration tool for products so one of the tasks on my TO DO list was product review migration. Looking around for (free) solutions the only stuff I came across was no longer free or didn’t work. $100 to import 500 reviews – I don’t think so.
If you are looking to do something similar here is a quick copy and paste of the php code I used for the Magento 1 export, and the Magento 2 import.
I am assuming you can plug this into your existing Magento php cli tools which already bootstrap the corresponding Magento 1 or 2 core classes. For Magento 1 I first took a full product collection and parsed each product for review data. All the data is saved to a tab separated CSV file which is then used for the Magento 2 product review import.
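As a rough sketch of the approach: a Magento 1 shell script for the export and a snippet for the Magento 2 import, both assuming the relevant framework is already bootstrapped. The reviews.csv filename and column layout are arbitrary.

<?php
// Magento 1 export (shell script with Mage::app() already bootstrapped):
// walk the product collection and write each approved review to a tab separated CSV.
$fh = fopen('reviews.csv', 'w');

$products = Mage::getModel('catalog/product')->getCollection()->addAttributeToSelect('sku');
foreach ($products as $product) {
    // the review collection joins review_detail, so title/detail/nickname are available
    $reviews = Mage::getModel('review/review')->getCollection()
        ->addEntityFilter('product', $product->getId())
        ->addStatusFilter(Mage_Review_Model_Review::STATUS_APPROVED);

    foreach ($reviews as $review) {
        fputcsv($fh, [
            $product->getSku(),
            $review->getNickname(),
            $review->getTitle(),
            $review->getDetail(),
        ], "\t");
    }
}
fclose($fh);

<?php
// Magento 2 import (e.g. inside a console command): $reviewFactory is
// \Magento\Review\Model\ReviewFactory and $productRepository is
// \Magento\Catalog\Api\ProductRepositoryInterface, both injected via DI.
$lines = file('reviews.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($lines as $line) {
    [$sku, $nickname, $title, $detail] = str_getcsv($line, "\t");

    $product = $productRepository->get($sku);
    $review  = $reviewFactory->create();
    $review->setEntityId($review->getEntityIdByCode(\Magento\Review\Model\Review::ENTITY_PRODUCT_CODE))
        ->setEntityPkValue($product->getId())
        ->setNickname($nickname)
        ->setTitle($title)
        ->setDetail($detail)
        ->setStatusId(\Magento\Review\Model\Review::STATUS_APPROVED)
        ->setStoreId(1) // default store view
        ->setStores([1])
        ->save();
}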
We run a small business with a small IT budget (i.e. zero) and I am migrating a Magento 1.x ecommerce website to Magento 2 – which is no mean feat!
It is now January 2020, in my last post from June 2019 I had just about completed the import of approximately 5000 products using my own migration scripts to help me export skeleton product data from Magento 1 into Magento 2 and then synchronise the full product data.
I had expected this project to take at least 6 months, it’s now 7 months later and I am still not finished! So what happened in the last 6 months?
Firstly, I was extremely busy and unfortunately could not allocate 100% of my time (or even 50% sometimes) to the project. Over the months between September and December it was even less, which naturally meant that a lot of migration tasks took a lot longer than I had initially expected.
After getting my product import and sync processes working smoothly I started to look for a suitable Magento 2 theme. It’s natural when upgrading from one shop to another that you want to try and keep things looking the same – after all, there was nothing wrong with the old shop look, right? But Magento 2 migration is more than an upgrade, it’s a complete technology refresh, so it makes sense to try and improve on as many aspects of your old store as possible. My frontend development skills are ok, but creating a new Magento 2 theme from scratch with all the basic features you expect from a commercial theme is a whole new project in itself. In my view it makes much more sense to buy a theme that meets your basic requirements and then customise it to meet your needs.
When it comes to buying commercial Magento themes on a budget there is a lot on offer, but my advice is to choose carefully and be prepared to buy three or four themes before you find the one that you will develop to become your Magento 2 store theme. A basic Magento 2 theme is not expensive – expect to pay between $100 and $400 for a good theme on Themeforest or the Magento Marketplace. However, buyer beware! Not all theme developers are the same. On a site like Themeforest look carefully at the developer’s sales figures, reviews and comments. A theme with 20,000 sales must be doing something right! Pay attention to the version of Magento 2 the theme supports and make sure it is up to date – Magento often make theme-breaking changes with minor releases, and there is no guarantee that a Magento 2.3.2 theme will work on 2.3.3 unless the developer says so in the release notes. There is no way to try before you buy, so purchase the theme and evaluate it. If it does not function as it is supposed to there is a good chance you will get a refund from Themeforest if you state your case clearly.
What makes a good theme? Look for a theme that gives you lots of design options – some themes have over 20 different designs included to choose from. Look for bundled extensions such as ajax layered navigation, but beware – if the bundled extensions are not up to date you may have problems getting support for them. Avoid gimmicks like frontend gui editors that are overly complex and might not be required if you are a developer.
Whilst building my Magento 2 development environment I made the decision to use Elasticsearch as the catalog search engine, as Magento state that use of the MySQL database as the Magento search engine will be deprecated. Because of this, the theme selection and testing process took over two months, as the first three themes I tested, whilst claiming to be compatible with Elasticsearch, were not. It soon becomes clear when talking to theme developers what level of PHP and Magento skill they have (or don’t have). One theme developer who was very keen to resolve my Elasticsearch problem for me simply changed the catalog search setting back to MySQL in admin without telling me! It took me a few days to notice that, and afterwards I ditched the theme. Having found a suitable theme I started to develop the look of the theme and product presentation.
It was around about this time that we realised we could make use of Magento 2 visual swatches a lot more in our products. Our Magento 1 shop has a lot of grouped products, and a lot of configurable products with dropdown option selection. We made the decision to convert configurable products to use visual swatches with options for size and colour. This meant I had to redesign hundreds of products and basically delete and reimport them. Magento 2 is not a shop upgrade, it’s a shop refresh – take this opportunity to improve your Magento product presentation as much as you can.
I systematically worked through our product categories exporting Magento 1 product skeleton data, modifying it manually in Excel, importing it into Magento 2 and then synchronising content, pricing, images etc. By the end of November the first three categories were complete – never underestimate the time required to process and import Magento product data! Even with spreadsheets, scripts (and MAGMI) it’s a lot of work. If your store has 100 simple products manually editing them is doable – with thousands of products manual product editing is just not viable.
So here we are in January 2020, I have more time now to spend on the project and I am (reasonably) confident that we will launch by March 2020!
Here are my lessons learnt over the last six months
Don’t try and upgrade your Magento 1 store to Magento 2. Use this opportunity to refresh and redesign your store! Think outside the Mage1 box…
Create a good product data workflow that allows you to quickly bulk import and modify products
Use MAGMI – yes it works with Magento 2!
Evaluate commercial themes very carefully, make sure they are completely compatible with the latest Magento 2 release before you commit to developing.
Always create your own child theme referencing the commercial parent theme, make all your changes in your child theme NOT the parent.
Use a vcs like Bitbucket to commit your child theme and code changes to, use this for the parent theme also – it’s a great way to control parent theme changes when a new theme version is released.
TEST – always have another dev box you can spin up with a copy of your latest data to test any updates and changes that might break the database or your code!
MAKE BACKUPS, and take backups of the backups. Losing this much work is unthinkable…
I am starting to see light at the end of my migration tunnel – still a lot of work to do with products and theme customisation and a whole lot of testing – not to mention new hosting and go live planning, and, and – AND…
Gmail lets you send emails from a different email address or alias, so that if you use other email addresses or providers, you can send email from that address via Gmail.
To do this you configure an alias account in Gmail settings configured with the credentials of your mail service. This is useful if you want to consolidate email accounts or if you have a private email server and want to use Gmail to send email via this server using your various private email address domains.
At the start of April 2020 Google rolled out a security update that affected mail delivery using third party accounts. All emails sent via the alias account would not deliver, bouncing back with the error:
TLS Negotiation failed, the certificate doesn’t match the host.
Google appears to be enforcing a new email encryption policy for secure TLS connections, including validating that the host name on the mail server TLS certificate matches the canonical hostname (MX record) of the third party mail account.
If the host name does not match you can no longer use an encrypted TLS connection in Gmail to send email via your (or your ISP’s) mail servers.
For example, if your MX record resolves to mail.domain.com but the TLS certificate presented is for smtp.domain.com then Gmail will not connect to your mail server. For some users the only option to get mail working again is to revert to an unencrypted connection – strange that Google even allow that!
Google also no longer accept self signed certificates for TLS mail connections.
I use an EXIM4 docker container for my private mail relay, and use Gmail as the hub for my email send/receive. To work around this problem I created a Docker Certbot container and issued new LetsEncrypt TLS certificates for all my private mail domains used with Gmail, as well as the primary TLS certificate for my mail server.
I can confirm this resolves the problem and third party provider email sending via Gmail is now working again.
For anyone using Exim4, the way to configure Exim to use multiple TLS certificates is to match them dynamically to the requested mail domain, along the lines of the configuration sketched below.
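Exim’s tls_certificate and tls_privatekey main options can use the SNI name presented by Gmail to select the matching certificate, falling back to the primary mail server certificate. The LetsEncrypt paths and the mail.example.com fallback are assumptions – adjust them for your own setup.

tls_certificate = ${if exists{/etc/letsencrypt/live/${tls_in_sni}/fullchain.pem}\
    {/etc/letsencrypt/live/${tls_in_sni}/fullchain.pem}\
    {/etc/letsencrypt/live/mail.example.com/fullchain.pem}}
tls_privatekey = ${if exists{/etc/letsencrypt/live/${tls_in_sni}/privkey.pem}\
    {/etc/letsencrypt/live/${tls_in_sni}/privkey.pem}\
    {/etc/letsencrypt/live/mail.example.com/privkey.pem}}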
Thousands of people have been affected by this. Considering the number of people working from home or struggling to work at all during the Corona Virus pandemic, it’s really bad timing by Google to implement new email security policies that are service affecting for a lot of users.
It’s Easter 2020, we are in the middle of a global pandemic, locked down at home in self isolation and the world is in utter chaos. In other news my first Magento 1 migration is complete and taking orders!
At the start of the year I still had an awful lot of work to do but was confident with more time to dedicate to the migration we would go live in February. As always things take a lot longer than planned. We spent a lot of time working on new product images and getting product and category presentation right. These are important aspects of your migration plan, never forget that as a merchant you must sell products, the most up to date, optimised, speedy and technically brilliant Magento 2 store is worth nothing if no one wants to buy your products!
I also spent a lot of time testing my go live plan. This is where my docker development infrastructure really helped as copying over backups of my dev docker containers and data to another virtual server and creating a complete working copy of my development environment took a matter of minutes (I can recommend using cheap 120GB SSD drives for this). This enabled me to test all the important stuff such as customer login, account creation, orders and payment methods. Over and over again!
You cannot test too much, and even with the most thorough testing you will miss issues. I had to roll back to 2.3.3 because of a 2.3.4 issue I almost missed…
I had a couple more bad experiences with module developers where I purchased modules and they did not work correctly. Whilst I am not a big fan of the Magento Marketplace you do have a better chance of obtaining a refund via the Marketplace than from a developer site directly. One module was so badly coded that there was no way I was going to use it in production. The developer would not budge on a refund, and even PayPal refused to give me my money back – you live and learn.
I developed Magento 2 modules to migrate custom functionality including dynamic SEO tag generation, free product samples and buy X get Y functionality.
We purchased a new hosting server at the beginning of March (lots of RAM, lots of Cores) and the go live date slipped a little bit more to the middle of March. The changeover from old site to new was pretty flawless and only took about an hour, DNS updates took a bit longer but it was a relief to see customers logging in to their accounts without any problems and the first orders coming in.
In the first week after go live there were only a couple of problems that arose, including a nice Magento 2 bug that prevents customer emails from being processed if they contain non-ASCII characters! I will write that one up in another blog post. We also noticed an issue with Tier Pricing that we failed to identify during testing.
To summarise this first migration from M1 to M2
– Completely redesigned our frontend Theme, basing it on a good commercial theme and implementing our own customisations.
– Bought 5 or 6 commercial extensions that we either already used in Magento 1 or needed for our business requirements.
– Rewrote our Magento 1 modules for Magento 2 which included page/product customisations, SEO, product sliders and galleries and custom cart features.
– The migration took 11 months from ZERO to go live.
Of course the middle of March 2020 with most countries beginning to implement lockdown rules because of the Corona Pandemic was possibly THE worst time to launch a new ecommerce site. In the week we launched most of our customer base was forced to close and our company has been severely affected by the loss of revenue. With that in mind I don’t really feel it’s the time to celebrate our Magento 2 migration. I will leave that for another day…
Magento 2 PageSpeed (Lighthouse) performance audit results for mobile and desktop are notoriously bad. Imagine you have worked for months on a new Magento 2 eCommerce store, followed best practices for setup and optimisation, and the store seems to be running fine, but the first time you run a Lighthouse report you see a performance score like this:

There are a lot of factors that can affect the Lighthouse performance results for any website, but for Magento 2 a big performance killer is the sheer amount of external resources required to render a page, whether it be a product page, cms page or category page. Some of these render blocking resources, such as Javascript or CSS, can cause significant delays in page loading and affect performance. You will see this type of performance problem identified in Lighthouse as “Eliminate render-blocking resources”.
Magento 2 uses the RequireJS Javascript module system to load the Javascript source code required for each Magento 2 page. If you have a lot of custom features, with modules implementing additional Magento 2 Javascript mixins, the number of Javascript resources loaded in addition to the core Javascript code required by Magento will increase and adversely affect page loading performance. As an example, here is the network console log from a really simple product page on my development site – you can see that there are 194 requests for Javascript resources!
There are various ways to try and reduce the performance impact of loading lots of Javascript, including using HTTP/2, which is great at handling small file requests quickly, or minifying the Javascript source to reduce its size, but the most effective way of optimising Javascript loading is to use bundling.
Javascript bundling is a technique that combines or bundles multiple files in order to reduce the number of HTTP requests that are required to load a page.
Magento 2 has a built in javascript bundler that is extremely ineffective! Users report it creating a huge multi megabyte javascript file that decreases performance instead of improving it. You will actually see the recommendation not to use the built in Magento 2 bundling referenced in Lighthouse reports – “Disable Magento’s built-in JavaScript bundling and minification, and consider using baler instead.”
Baler mentioned here is an AMD (Asynchronous Module Definition) module bundler / preloader for Magento 2 stores. You will find a lot of Magento 2 js bundling guides that recommend using Baler but for the average developer (like me) or Magento 2 merchant the bundling process with Baler can be quite complex and daunting. There is however a new Magento 2 js bundler available that is much easier to use.
MageSuite Magepack
Magepack from MageSuite is a “Next generation Magento 2 advanced JavaScript bundler”. It’s pretty easy to implement and as of version 2.0 the results it achieves are very impressive:
Up to 91 points mobile score in Google Lighthouse.
Up to 98% reduction in JavaScript file requests.
Up to 44% reduction in transferred JavaScript size.
Up to 75% reduction in total load time.
Works with Magento’s JavaScript minification and merging enabled.
Uses custom solution (inspired by Baler)
I installed Magepack on my Magento 2 development site in May 2020 and achieved a 100 desktop performance score with PageSpeed –
This is a simple product page, using the default Luma theme and I am also using Nginx as a container proxy running the PageSpeed module, so you probably won’t achieve this kind of result on a real world product page but you will see a huge improvement. Check the results yourself here.
Let’s look at how to setup and install MagePack for Magento 2.3.x.
Setup and install MagePack for Magento 2.3.x
MagePack consists of a NodeJS bundler app and a Magento 2 module. The bundler app runs on Node JS v10 or higher. I’m running MagePack in my Docker Magento 2 PHP container, which runs Ubuntu Server 18.04. To install Node JS simply run:
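Something along these lines works on Ubuntu 18.04 – the NodeSource repository and the dependency list below are assumptions on my part; Magepack downloads a headless Chromium (via Puppeteer) which needs the usual browser libraries.

curl -sL https://deb.nodesource.com/setup_10.x | bash -
apt-get install -y nodejs

# typical libraries needed by the headless Chromium that Magepack downloads
apt-get install -y ca-certificates fonts-liberation libasound2 libatk-bridge2.0-0 \
    libgtk-3-0 libnss3 libx11-xcb1 libxss1 libgbm1 xdg-utils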
Finally to install the NodeJS MagePack app itself run npm install -g magepack --unsafe-perm=true --allow-root
Installing Magepack NodeJS app and dependencies
You will see that Magepack pulls down Chromium – it needs a web browser to analyse your Magento 2 site, most of the dependencies installed earlier are required for Chromium.
Next, depending on the version of Magento 2 you are running you might need to install some patches. If you are running 2.3.4 or greater you can skip the next part. For Magento 2.3.3 and earlier three patches are required, and the most painless way of patching Magento 2 is to use Cweagans/Composer-Patches
composer require cweagans/composer-patches
You will find all the patches you need here : https://github.com/integer-net/magento2-requirejs-bundling
In your Magento 2 installation folder create a patches folder, copy the patches into it, and edit your Magento 2 composer.json file to include the following composer extra patches config.
composer extra patches config
"extra": {
"magento-force": "override",
"composer-exit-on-patch-failure": true,
"patches": {
"magento/magento2-base": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-base.diff",
"Refactor JavaScript mixins module https://github.com/magento/magento2/pull/25587": "patches/composer/M233/github-pr-25587-base.diff"
},
"magento/module-braintree": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-braintree.diff"
},
"magento/module-catalog": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-catalog.diff"
},
"magento/module-customer": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-customer.diff"
},
"magento/module-msrp": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-msrp.diff"
},
"magento/module-paypal": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-paypal.diff"
},
"magento/module-theme": {
"[Performance] Fix missing shims and phtml files with mage-init directives (https://github.com/magento/magento2/commit/db43c11c6830465b764ede32abb7262258e5f574)": "patches/composer/M233/github-pr-4721-theme.diff",
"fix_baler_jquery_cookie": "https://gist.github.com/tdgroot/f95c398c565d9bbb83e0a650cdf67617/raw/69ee2d001ff509d25d1875743e417d914e20fd85/fix_baler_jquery_cookie.patch"
}
}
}
Now run composer update. Magento 2 will be patched and we are good to go.
Let’s get ready to bundle
Magepack needs to analyse pages from your Magento 2 store to determine the Javascript files your store is using and how they can be bundled. It saves this information in a configuration file called magepack.config.js. The magepack config file is generated by analysing three different types of page from your Magento 2 store: a cms page (i.e. the home page), a category page and a product page. This is done using the magepack generate command and supplying three store urls.
Run this command in the root folder of your Magento 2 installation to create the magepack.config.js file. It’s worth noting that you could run this generate command from any system, and just copy the generated config file to your Magento 2 server.
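Assuming example.com placeholder urls, the generate command looks something like this – point the three urls at a representative cms page, category page and product page on your store:

magepack generate \
    --cms-url="https://example.com/" \
    --category-url="https://example.com/women/tops-women.html" \
    --product-url="https://example.com/juno-jacket.html"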
If you take a look at magepack.config.js you will see it contains references to all the javascript required to load Magento pages. Below is an example from a product page.
All that remains now is for us to create the bundle files and deploy them for all our store views and themes. This is simply done with the magepack bundle command which you can execute from the Magento installation root folder.
magepack bundle
Magepack bundle command
Finally, enable Magepack Javascript bundling in admin:
Note that you should also enable the other Javascript optimisation options here, including minify Javascript files and move JS code to the bottom of the page – but don’t enable the default bundling!
MagePack Javascript bundling should now be enabled. To check it’s working, go to a Magento 2 product page and look at the source code; do a search for “bundle” and you should see the Magepack Javascript bundles.
Now refresh the page and have a look at your network log
After bundling there are only 7 js requests on the product page
Instead of loading 194 Javascript files, the product page now loads 7, Magepack has bundled all the Javascript into two main bundle files.
I guess it’s now time to look at the PageSpeed Lighthouse performance reports for your optimised Magento 2 pages. If you are using the Chrome browser simply run a Lighthouse report from the DevTools page. You can also use Google’s PageSpeed Insights tool at https://developers.google.com/speed/pagespeed/insights/
This is the improvement I saw in a live production Magento 2 site
Production Magento 2 product page before and after bundling
If you don’t see a big improvement, remember there are a lot of other factors taken into account in Lighthouse performance reports. Work through the report and try to find out where you can make further improvements.
Deployment in production
Whenever you flush your site’s static files you will need to remember to run magepack bundle again. In production mode you should add this to your deployment process.
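As a rough example (the exact steps depend on your own process), the key point is that magepack bundle runs after static content deployment and before the cache flush:

bin/magento maintenance:enable
bin/magento setup:upgrade
bin/magento setup:di:compile
bin/magento setup:static-content:deploy
magepack bundle
bin/magento cache:flush
bin/magento maintenance:disable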
You should test your store thoroughly to make sure there are no Javascript problems caused by the bundling process. Magepack cannot always 100% bundle all the Javascript required by some pages. Check your web browser console for errors. If you find some features of your store are not working, try and identify if the code was included in the magepack.config.js file. Try removing the code from the bundle and test again.
Magepack is pretty new, with updates being made regularly, so be sure to check out the project’s GitHub page for new issues.
I was working on my procedures for applying updates to a production Magento 2 site recently and decided it was a pretty good idea to put Magento into maintenance mode first before making any changes or updates that might temporarily break the site and return a nasty error message. The default production maintenance page for Magento 2 looks like this
Default Magento 2 Production Maintenance Page
It’s not exactly what I would call a thing of beauty. A Google search reveals a plethora of solutions – but I really wanted something simple. In my mind a custom module with a thousand customisation options for a maintenance page is somewhat overkill. You can also create your own custom response by editing or extending the 503.phtml file in pub/errors/default.
503 Service Unavailable
Notice that in production mode the maintenance page returns a 503 error, which is correct as we want any visitors (and crawlers) to know that the site is temporarily unavailable. (In development mode this is a much less friendly HTTP 500 error!)
There is however a problem associated with your Magento site returning a 503 error in maintenance mode. If you are using Varnish, and especially if you are using the health probe in Varnish, the 503 error will cause Varnish to eventually declare the server as sick and throw its own extremely unfriendly error – something about meditating gurus.
If you look at the Magento docs they actually suggest creating a custom maintenance page via the web server – Apache or NginX. The examples show a configuration whereby the web server redirects to a custom url when a maintenance file is present on the system.
server {
    listen 80;
    set $MAGE_ROOT /var/www/html/magento2;
    set $maintenance off;

    if (-f $MAGE_ROOT/maintenance.enable) {
        set $maintenance on;
    }

    if ($remote_addr ~ (192.0.2.110|192.0.2.115)) {
        set $maintenance off;
    }

    if ($maintenance = on) {
        return 503;
    }

    location /maintenance {
    }

    error_page 503 @maintenance;

    location @maintenance {
        root $MAGE_ROOT;
        rewrite ^(.*)$ /maintenance.html break;
    }

    include /var/www/html/magento2/nginx.conf;
}
Here they are suggesting that if the file maintenance.enable is present NginX will 503 redirect to a custom maintenance file. A similar config example is available for Apache.
This also works quite well, and if you change the file detection to the Magento 2 system generated maintenance file var/.maintenance.flag, the custom page will be shown as soon as you place Magento into maintenance mode – cool!
But there are still a couple of drawbacks. First, with your site returning 503 for all pages your maintenance page can’t load any external js or css hosted on your Magento server, so your maintenance page needs to be pretty basic. Second, you are still returning a 503 to Varnish which will eventually cause a health error.
Chances are if you are using Varnish you also have an NginX reverse proxy in front of Varnish providing TLS encryption, or if you are using Docker, NginX is reverse proxying http/s to your containers. If so then this is the best place to configure your custom maintenance page, and you can create a really nice looking dynamic Magento custom maintenance page that will appear as soon as you place Magento into maintenance mode – or whenever Magento or Varnish return 503 errors.
For Docker you will need to mount a volume on NginX giving it access to the var/ folder in Magento so that it can detect the .maintenance.flag file.
The NginX config looks like this
# MAGENTO 2 Maintenance Mode
set $MAGE2_ROOT /var/www/gaiterjones/magento2/;
set $maintenance off;

if (-f $MAGE2_ROOT/.maintenance.flag) {
    set $maintenance on;
}

if ($maintenance = on) {
    return 503;
}

error_page 503 @maintenance;

location @maintenance {
    root /var/www/html;
    rewrite ^(.*)$ /magento2-maintenance.html break;
}
Here you can see the Magento var folder is mounted to /var/www/gaiterjones/magento2 in the NginX container, and if the maintenance flag file exists we redirect to a local magento2-maintenance.html page in /var/www/html.
The custom maintenance.html page can be any kind of page you want. As soon as you do a bin/magento maintenance:enable NginX will show the maintenance page returning a 503 code to any visiting customers (or search engines). My page refreshes every 30 seconds so as soon as you do bin/magento maintenance:disable customers will automatically see your shop again (hopefully).
Configurable products have changed a lot in Magento 2. Compared to Magento 1, configurable products in Magento 2 are just containers for simple products (variations). You can no longer configure pricing data directly in the configurable product, as the parent configurable inherits all of its unit price, tier price and inventory data from the child simple products. It’s now much easier to create products with variations such as size, colour etc. and to display them in various different ways including visual swatches.
Large Configurable Products are slow to Load!
One of the downsides to Magento 2 configurable products is that large configurable products can become slow to load in the frontend when a lot of variations are configured. A configurable product with a lot of options can quickly become very LARGE. I worked on a store with a product that is available in over 250 colours and four sizes. This results in a configurable product with over 1,000 child products, and whilst theoretically there is no limit to the number of simple products in a configurable container product, in practice the way Magento 2 builds the frontend product can lead to very slow load times.
In the frontend, Magento 2 loads all variations in a giant JSON object and renders that into the DOM. This JSON object is 20 megabytes for 10,000 variations. In the backend, this JSON is also built and passed to a UI component wrapped in XML, and PHP’s libxml is not able to append extremely large XML structures to an existing XML structure.
Even with 1000 variations page load time for an uncached configurable product was in excess of 30 seconds.
Elgentos LCP
Fortunately the nice people at Elgentos open sourced a module they had developed for a customer experiencing exactly this problem with slow loading large Magento 2 configurable products. elgentos/LargeConfigProducts greatly improves the loading time of large configurable products by pre-caching the product variation data in the backend and loading the frontend variation JSON via an asynchronous ajax request. This results in a much faster load time of the parent product and the cached JSON variation data.
When I found the module there were some issues with Magento 2.3.x compatibility which the developer had not had time to correct. I made some changes to the module to make it compatible, added AMQP/RabbitMQ integration, and am now using it in production without any issues.
“Prewarming” is the process of creating and caching the variation data for configurable products. The module uses Redis to cache the data and you should specify your redis host, TCP port and choose a new database for the data.
The module includes a new indexer that will prewarm all configurable products when you manually reindex with bin/magento indexer:reindex
With the module configured and enabled all configurable products now load variation data via an ajax request. If a product has not been prewarmed by an index upon first frontend load the variation data will be created and cached. You can also manually create the variation data using a console command
bin/magento lcp:prewarm --products 1234 -force
This will force a prewarm of variation data for a configurable product with the id 1234.
When you make a change to a configurable product, or to one of its child simple products, the module uses a message queue to update the configurable product’s cached data. Magento 2 has built in AMQP/RabbitMQ integration and you can configure this in app/etc/env.php.
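The amqp connection lives in the queue section of env.php – the rabbitmq host name and guest credentials below are placeholders for your own RabbitMQ service:

'queue' => [
    'amqp' => [
        'host' => 'rabbitmq',
        'port' => '5672',
        'user' => 'guest',
        'password' => 'guest',
        'virtualhost' => '/'
    ]
],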
Messages are created by a publisher and actioned by a consumer. To list all the configured Magento 2 consumer queues use:
bin/magento queue:consumers:list
You will see that elgentos_magento_lcp_product_prewarm is listed. To run the prewarm consumer use bin/magento queue:consumers:start elgentos_magento_lcp_product_prewarm. This will start processing all messages generated by the module, updating the variation cache for any products that have been changed.
You should ensure that your consumer process is always running. If you use Docker you can create a small consumer container for this purpose.
I can also recommend using the RabbitMQ Docker container image rabbitmq:management – the built-in management GUI is useful for monitoring message data. Here you can see the LCP message generation for the prewarm consumer after performing a reindex.
RabbitMQ Management Gui
In my opinion this functionality should be built into Magento by default to improve the loading time of large configurable products. Changes are coming to configurable products in Magento 2.4 so perhaps there will be improvements made in this area.
Many thanks to Elgentos and Peter Jaap Blaakmeer for making this module freely available to the community and allowing me to contribute to it.
Migrating the Magento 2 catalog search engine to Smile ElasticSuite will resolve pretty much all the issues you might be experiencing with Magento 2 native ElasticSearch catalog search so go ahead and Jump to the ElasticSuite installation.
If you are new to ElasticSearch or want to find out how to customise ElasticSearch in Magento 2 read on!
Magento Catalog Search
Up until version 2.3 the default catalog search engine for Magento used the MySQL Magento database. Using MySQL for search was adequate but it lacked the features and scalability of enterprise search solutions. In version 2.3 Magento built in support for ElasticSearch as the catalog search engine and announced in 2019 that MySQL search would be deprecated. As of version 2.4, released in July 2020, the MySQL catalog search engine was removed completely from Magento 2.
MySql catalog search deprecation notice
Native support for ElasticSearch was good news for merchants. Elasticsearch is a Java based open-source, RESTful, distributed search and analytics engine built on Apache Lucene. Since its release in 2010, Elasticsearch has become one of the most popular search engines.
It’s worth mentioning that ElasticSearch in Magento 2 is not just used for user full text search queries, the catalog search engine is responsible for returning all catalog queries including category products and filtered navigation product lists.
For customers ElasticSearch should provide faster and more relevant search experiences – the problem for merchants is that out of the box Magento 2 ElasticSearch just doesn’t do this – catalog search results and relevance have a tendency to be extremely poor. The built in search struggles to provide accurate and relevant results for simple queries such as SKUs.
Whilst MySql catalog search had some admin options to refine search results, there are no options available to customise catalog search with ElasticSearch. ElasticSearch is a great search engine but the native Magento 2 catalog full text search implementation is very disappointing.
Let’s look at ways to customise ElasticSearch catalog search in Magento using your own module to improve some areas of search relevance.
Simple SKU Search
Poor search results or search relevance with native Magento ElasticSearch is very apparent when searching for SKUs. Let’s look at a simple SKU search for one of the sample products provided in the Magento 2 sample data.
SKU Search for Magento 2 sample product
Article MH03 is a range of Hoodies. Searching for ‘MH03’ correctly returns all 16 products. But what if you want to search for MH03-XL?
Refined SKU Search for Magento 2 sample data
Here we see that 112 items are returned when in fact only the first 3 were 100% matches for the search term. Native search really struggles with search terms containing special characters such as the hyphen commonly used in SKUs resulting in extremely poor search results. To look at why we are seeing so many results returned we need to look at the relevance score of the search results.
Customise Elastic Search
To capture the data returned by an ElasticSearch frontend full text search query we need to create a debug plugin for Magento\Elasticsearch\SearchAdapter\ResponseFactory that will let us analyse the search data and log it.
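Here is a minimal sketch of such a plugin – the Vendor\SearchDebug namespace is a placeholder, and the plugin still needs to be declared against Magento\Elasticsearch\SearchAdapter\ResponseFactory in the module’s di.xml.

<?php
namespace Vendor\SearchDebug\Plugin;

use Magento\Elasticsearch\SearchAdapter\ResponseFactory;
use Psr\Log\LoggerInterface;

class ResponseFactoryDebug
{
    private $logger;

    public function __construct(LoggerInterface $logger)
    {
        $this->logger = $logger;
    }

    // Log the raw ElasticSearch response (documents with their relevance scores)
    // before Magento turns it into a QueryResponse object.
    public function beforeCreate(ResponseFactory $subject, $response)
    {
        $this->logger->debug('ElasticSearch raw response: ' . print_r($response, true));

        return null; // leave the arguments untouched
    }
}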
The plugin dumps the search data to the debug log in var/log, this allows us to look more closely at the ElasticSearch results for the MH03-XL SKU full text search :
The debug shows the ElasticSearch raw document score for the 112 search results. You can see that the score for this search ranges from 7.9 to 40.5, with the most relevant results having a higher score. If we were to define a minimum relevance score of 40 the search results would be much more accurate.
aroundBuildQuery plugin for SearchAdapterMapperPlugin
public function aroundBuildQuery(
Mapper $subject,
callable $proceed,
RequestInterface $request
) {
$searchQuery = $proceed($request);
if ($request->getName() === 'quick_search_container') {
$searchQuery['body']['min_score'] = $this->configuration->getMinScore();
}
return $searchQuery;
}
Here we set a min_score value for the search query. Setting this to 40 would return just three results for the MH03-XL SKU search.
SKU Search for Magento 2 sample products with min_score value
This looks much better, we can improve the relevance of the search results by filtering out results that have a low ElasticSearch score. The tricky part here is deciding what the minimum score value should be – it can be difficult to find a value that works well for different search queries.
Another useful ElasticSearch customisation is changing the ngram values when indexing the catalog. The ngram tokenizer helps break search terms up into smaller words. We can change the ngram value with another plugin
The module adds a configuration value to Stores -> Configuration -> Catalog -> Search where you can set the minimum score value for Elastic Search results.
Configure search minimum score value in admin
It’s a real shame that ElasticSearch customisation options such as these are not built into Magento 2 by default to help Merchants improve the search experience. ElasticSearch is new to me, and will be to a lot of merchants and Magento devs upgrading to Magento 2.4. It’s a very complex system to understand and although we can tweak some values as shown to improve results this is not a great solution to the problem.
If like me you are still not happy with the native Magento 2 ElasticSearch catalog search results the absolute best solution I have found is to migrate to Smile ElasticSuite
Simply put, installing the Smile ElasticSuite modules and changing catalog search to ElasticSuite will immediately give you almost perfect search results. ElasticSuite is very simple to install and works out of the box improving search results and search relevance.
Install Smile ElasticSuite
Here are the steps required to install ElasticSuite.
Note that ElasticSuite includes its own custom layered navigation; if you are already using third party layered navigation modules you will need to disable these before installing ElasticSuite.
You will need to choose the correct ElasticSuite version for the version of Magento 2 you are using. Here are the options for Magento 2.3.x and 2.4:
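A sketch of the install using composer – the version constraints below are indicative only, so double check the ElasticSuite compatibility matrix for the release matching your exact Magento version:

# Magento 2.3.x
composer require smile/elasticsuite ~2.8.0

# Magento 2.4.x
composer require smile/elasticsuite ~2.10.0

bin/magento setup:upgrade
bin/magento setup:di:compile
bin/magento setup:static-content:deploy
bin/magento indexer:reindex
bin/magento cache:flush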
To change your catalog search engine to ElasticSuite navigate to Stores -> Configuration -> Catalog -> Search and select ElasticSuite as the new catalog search engine.
Configure ElasticSuite as Catalog Search Engine
Refresh caches and the ElasticSuite catalog search engine should now be setup and working – congratulations – Magento 2 full text catalog search just got a whole lot better!
If you see the following error in the frontend, simply run the indexer again:
Exception #0 (LogicException): catalog_product index does not exist yet. Make sure everything is reindexed.
ElasticSuite has some great features :
A new search input form with automatic and relevant search suggestions in a dropdown list
Layered navigation with price sliders and advanced filters
Automatic redirect to product when search returns a single matching product
Automatic spell checked search terms
Smart virtual product categories
Search analysis in admin
You will notice straight away when searching for SKUs that ElasticSuite returns more relevant results than native search.
ElasticSuite sample product SKU search
Using the SKU search example you can search for all or part of the SKU with or without the hyphen and accurate search results will be returned. Notice below the search for MH03 XL without the hyphen returns the correct results
ElasticSuite sample product SKU search
The redirect to product feature when a single matching product is found in search is really useful taking the customer directly to the relevant product.
The search analysis in admin is a great feature allowing you to see how search is being utilised by your customers and which search terms lead to conversions.
ElasticSuite search analysis
For more information on ElasticSuite features and configuration consult the ElasticSuite Wiki or visit the website.
Many thanks to the Smile team for making this module freely available to the Magento 2 community.
I spent many hours recently trying to figure out why a custom Magento 2 customer registration attribute was not working only to find that a relatively simple mistake was the culprit.
The attribute appeared to be created correctly in the eav_attribute database table, but the frontend value was not being saved to the database when the customer registered.
The attribute was a checkbox, which has a boolean true/false value. Although I set the checkbox to be checked by default, my mistake was simply not setting a default value on the attribute itself, which meant that no value for the custom attribute was being passed via the registration form to the backend customer registration / creation process.
In case anyone else is trying to create a custom checkbox (boolean) attribute and experiencing the same mind numbingly annoying problem, here is a module with a demonstration of two custom Magento 2 attributes – checkbox and text input – showing the correct working code.
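For reference, here is a hedged sketch of the important part – a data patch that creates a boolean customer attribute with a default value and adds it to the registration form. The Vendor\CustomerAttribute namespace and the newsletter_optin attribute code are placeholders.

<?php
namespace Vendor\CustomerAttribute\Setup\Patch\Data;

use Magento\Customer\Model\Customer;
use Magento\Customer\Setup\CustomerSetupFactory;
use Magento\Framework\Setup\ModuleDataSetupInterface;
use Magento\Framework\Setup\Patch\DataPatchInterface;

class AddOptinAttribute implements DataPatchInterface
{
    private $moduleDataSetup;
    private $customerSetupFactory;

    public function __construct(
        ModuleDataSetupInterface $moduleDataSetup,
        CustomerSetupFactory $customerSetupFactory
    ) {
        $this->moduleDataSetup = $moduleDataSetup;
        $this->customerSetupFactory = $customerSetupFactory;
    }

    public function apply()
    {
        $customerSetup = $this->customerSetupFactory->create(['setup' => $this->moduleDataSetup]);

        $customerSetup->addAttribute(Customer::ENTITY, 'newsletter_optin', [
            'type' => 'int',
            'label' => 'Newsletter Opt In',
            'input' => 'boolean',
            'default' => 1, // the default value - omitting this was the mistake described above
            'required' => false,
            'visible' => true,
            'user_defined' => true,
            'system' => false,
            'position' => 100,
        ]);

        // make the attribute available on the registration form and in admin
        $attribute = $customerSetup->getEavConfig()->getAttribute(Customer::ENTITY, 'newsletter_optin');
        $attribute->setData('used_in_forms', ['customer_account_create', 'adminhtml_customer']);
        $attribute->save();
    }

    public static function getDependencies()
    {
        return [];
    }

    public function getAliases()
    {
        return [];
    }
}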
The Magento Message Queue Framework was made available in the Open Source version of Magento with version 2.3 and provides a standards based system for modules to publish messages to queues and also defines the consumers that will receive the messages asynchronously.
What are Asynchronous Message Queues?
The normal way for a module to process data caused by some kind of front or backend action is to create an observer that listens for the specific Magento event and then processes the event data in some way. For example if you want to process order data whenever an order is placed, you would create a custom module with an order event observer. When a customer places an order the event will be fired and your module can process the order data.
This event type processing occurs synchronously with the event. This can be a problem if your process is intensive and takes some time to complete, or if for some reason it causes an error. Synchronous processing of events may cause delays for the customer in frontend processes.
Asynchronous Message Queues allow your event to be placed in a queuing system and processed by your modules consumer as soon as possible.
The Magento Message Queue Framework consists of the following components:
Publisher
A publisher is a component that sends messages to an exchange.
Exchange
An exchange receives messages from publishers and sends them to queues.
Queue
A queue is a buffer that stores messages.
Consumer
A consumer receives messages. It knows which queue to consume.
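As a rough illustration of the publisher side, here is a minimal sketch – the vendor.order.placed topic name and Vendor\OrderQueue namespace are placeholders, and the topic would also need to be declared in the module’s communication.xml and queue_*.xml files.

<?php
namespace Vendor\OrderQueue\Model;

use Magento\Framework\MessageQueue\PublisherInterface;

class OrderPublisher
{
    private $publisher;

    public function __construct(PublisherInterface $publisher)
    {
        $this->publisher = $publisher;
    }

    // Push the order increment id onto the queue; a consumer declared in
    // queue_consumer.xml picks the message up and processes it asynchronously.
    public function execute(string $incrementId): void
    {
        $this->publisher->publish('vendor.order.placed', $incrementId);
    }
}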
By default Magento 2 uses the MySQL database as the exchange and queue system for messages. It does this with a MySQL adapter; message data is stored in the queue, queue_message and queue_message_status tables. Magento uses the cron job consumers_runner to manage queued messages by starting (or restarting) message consumers.
The fastest this can happen with the Magento cron system is once every 60 seconds. The job is configured in the MessageQueue module.
Using the Magento database for messaging is not very scalable
RabbitMQ should be used whenever possible.
Converting Magento 2 Message Queues to RabbitMQ AMQP
RabbitMQ is an open source message broker system and the Magento Message Queue Framework has built in support for RabbitMQ as a scalable platform for sending and receiving messages. RabbitMQ is based on the Advanced Message Queuing Protocol (AMQP) 0.9.1 specification.
Configuring Magento 2 to use RabbitMQ requires the addition of a new queue node in app/etc/env.php
Enabling message queues for RabbitMQ AMQP is simply a case of changing the configuration of the queue from DB to AMQP.
connection="amqp"
One built in message queue, async.operations.all, is configured to use AMQP by default. If you have successfully configured RabbitMQ you should see this queue in the RabbitMQ admin interface.
Default AMQP Message Queue
When creating message queues for new modules you should configure them to use AMQP, check out this module for an example module using a product save event message.
If you want to convert existing MySQL DB message queues to use AMQP you can accomplish this using extra nodes in the env.php queue configuration, as shown in the official documentation for the product_action_attribute.update queue.
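A sketch of that env.php configuration – verify the exact keys against the current documentation before using it:

'queue' => [
    'topics' => [
        'product_action_attribute.update' => [
            'publisher' => 'amqp-magento'
        ],
    ],
    'config' => [
        'publishers' => [
            'product_action_attribute.update' => [
                'connections' => [
                    'amqp' => [
                        'name' => 'amqp',
                        'exchange' => 'magento',
                        'disabled' => false
                    ],
                    'db' => [
                        'name' => 'db',
                        'disabled' => true
                    ],
                ],
            ],
        ],
    ],
    'consumers' => [
        'product_action_attribute.update' => [
            'connection' => 'amqp',
        ],
    ],
],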
In theory you can try and convert all Magento 2 MySQL message queues to AMQP RabbitMQ message queues. If you do this you will also probably want to create your own external consumers to process RabbitMQ message queues more efficiently.
Keep in mind that if you create external message queue consumers you should ensure that the consumer processes are always running. There are a few ways to accomplish this for example by using a supervisord process control system. If you are working with a Docker environment I recommend creating one or more consumer containers to manage Magento 2 AMQP messages.
You can configure the default Magento 2 consumers_runner cron job via env.php settings.
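The relevant env.php section is cron_consumers_runner – the consumer list below is just an example:

'cron_consumers_runner' => [
    'cron_run' => true,
    'max_messages' => 20000,
    'consumers' => [
        'async.operations.all',
        'product_action_attribute.update'
    ]
],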
Note that if you set cron_run here to false you are completely disabling the default consumer cron job in Magento. If you are using external consumers for some or all message queues think carefully before completely disabling this cron job.
Note – during testing I was unable to convert some existing Magento 2.4.2 MySQL queues to AMQP
Docker Consumers and Message Management
I like to use dedicated Docker containers to manage message queue consumer processes in my Magento 2 Docker environment. In theory a container should manage a single consumer, in practice it can be easier to run multiple consumers in one container. The container needs to know which consumers to start, and also needs to monitor consumer processes to ensure that they remain running.
To manage this process and make it easier to convert multiple message queues from MySQL db to AMQP I created a Message Manager module.
This module allows you to convert some or all queues to AMQP, automatically updating the env.php settings to configure the changes required for each module. The module also feeds data to my Docker container telling the container which consumer processes it should manage.
After installing the module you can display all existing message consumers with:
bin/magento messagemanager:getconsumers
This will return an array of consumers showing the queue name and the current connection method: db for MySQL or amqp for RabbitMQ AMQP.
To convert specific queues to AMQP you can define a whitelist in Gaiterjones\MessageManager\Helper\Data
public function whitelist(): array
{
    return array(
        'gaiterjones_message_manager',
        'product_action_attribute.update'
    );
}
Here I am specifying two queues that I want to use with RabbitMQ AMQP. I know gaiterjones_message_manager is already configured for AMQP and I want to convert product_action_attribute.update from MySQL to AMQP.
To do this I can use the getconfig command. First check your current AMQP config with bin/magento messagemanager:getconfig and ensure that your RabbitMQ server is configured.
Make sure you back up env.php before running this command!
After changing the config run setup:upgrade to reconfigure the module queues. If the queue config creates an error restore your env.php settings and run setup:upgrade again to restore the configuration.
Now you should see the converted queues in RabbitMQ admin.
MySQL queues converted to AMQP and visible in RabbitMQ
The gaiterjones_message_manager queue is installed with the module. To test that RabbitMQ is working use the test queue command bin/magento messagemanager:testqueue
You should see a new message created in RabbitMQ.
RabbitMQ Message Queue Test
Summary
It is very likely that future versions of Magento 2 will stop using MySQL as a messaging broker. In the same way that Search was moved to Elasticsearch we may have to use RabbitMQ in the future for all Magento async messaging.
Message queues are cool! If you are building production Magento 2 sites in a Docker environment it makes sense to start using RabbitMQ and external consumers now.
More Reading
In conclusion I recommend reading through the official documentation for more information on this subject.
Varnish and Magento 2 go together like Strawberries and Cream – you just cannot have one without the other.
Recently I got really confused about the correct way to configure Varnish for Magento 2, so for me and anyone else confused about the configuration here is the definitive guide to configuring Varnish for Magento 2.
The Definitive Guide to Configuring Varnish for Magento 2
To configure Varnish you need to know:
Varnish server name
Varnish listener TCP port – defaults to 6081
Magento content server name
Magento content server TCP port
If you are working with a single host the server name for Varnish and Magento will be localhost (127.0.0.1). If you are working in a Docker environment the server name for Varnish and Magento will be the container names of the Varnish and Magento web/content server services.
There are two areas in Magento where you must configure Varnish settings:
Magento Core Config : app/etc/env.php
Magento admin Stores -> Configuration -> System -> Full Page Cache
Core Config
The core config for Varnish in app/etc/env.php looks something like this:
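(a minimal sketch; varnish is an assumed container or host name for your Varnish service and 6081 its default listener port)

'http_cache_hosts' => [
    [
        'host' => 'varnish',
        'port' => '6081'
    ]
],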
In the admin Varnish Configuration (and the VCL generated from it) the backend host is the hostname of your Magento 2 content server, for example magento2_php-apache_1, and backend_port is the TCP port of the content (Magento) server.
By default Varnish is configured to listen for incoming external client http requests on TCP 6081.
The backend_port configured in admin is only used for the VCL config generation.
The env.php http_cache_hosts port is the port used to communicate with Varnish.
To confirm your Varnish cache is working examine the headers returned by your Varnish server when browsing Magento frontend pages. You can also inspect the headers using curl:
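Something along these lines (the URL is a placeholder and the exact set of headers you get back will depend on your setup):

curl -sS -D - -o /dev/null https://www.example.com/

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Age: 64
X-Magento-Cache-Debug: HIT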
Here we can see the X-Magento-Cache-Debug header showing a cache hit. Note – this header will be disabled in production mode.
Remember that Varnish has no native support for TLS/HTTPS connections. To use an encrypted TLS connection to Magento 2 with Varnish FPC you need to use a frontend proxy such as NGINX.
A few years ago the nice people at Amazon started to offer unlimited cloud storage for Amazon Prime customers on their so called Amazon Drive service. A lot of people, including me, jumped on board – I mean it was unlimited storage – with emphasis on the UNLIMITED.
This was great – for about a year, lots of people were boasting about the many, many, many Terabytes of data they had stored on Amazon Drive. For me it was a perfect solution to my cloud backups. I subscribed to ODrive and synced all my working data to Amazon Drive.
Then one day Amazon announced that Amazon Drive for Prime customers would no longer have unlimited storage, and would no longer be free. In fact 1TB of storage would cost $99.99 per year. They had dangled a nice free juicy worm in front of me, I had bitten, and now I was well and truly hooked – hook, line and sinker.
What choice did I have? All my data was already stored on Amazon Drive so I swore a lot, got out my credit card and paid up.
Another year or so later (2019) it’s time to pay Amazon another 100 bucks and I notice that Amazon Drive is now called Amazon Photos. Without really informing their customers, Amazon has rebranded it into a photo and video storage service. Not exactly what I expected from a cloud storage solution.
As soon as I discovered Google Drive offering twice the amount of storage (2TB) for the same price I decided it was time to migrate my ODrive from Amazon Drive to Google Drive. The problem is, with almost 1TB of data in the cloud – even with reasonably fast home DSL internet bandwidth – how do you move so much data around? To download it all, and then upload it all, would take days, and I don’t even have enough disk space on my shiny silver MacBook Pro to temporarily store it all (the reason why I moved to cloud storage in the first place).
Here is the solution – How To Migrate your Terabytes in Hours not Days
Spin up a bare metal Windows server
I used a Win2k16 Server on a Packet c1.small.x86 with super duper gigabit bandwidth.
Install the Amazon Photos app.
Using the Amazon app restore your files to a local folder.
Login to Google and upload the restored files to Google Drive.
In ODrive unsync the Amazon Drive synced folders and resync them to Google Drive.
FIN.
I moved all of my data in a few hours, and it cost about $30.
Goodbye Amazon, please don’t try and trap me into a shitty service again.
I wasn’t able to use the Magento 2 migration tool for products so one of the tasks on my TO DO list was product review migration. Looking around for (free) solutions the only stuff I came across was no longer free or didn’t work. $100 to import 500 reviews – I don’t think so.
If you are looking to do something similar here is a quick copy and paste of the PHP code I used for the Magento 1 export and the Magento 2 import.
I am assuming you can plug this into your existing Magento PHP CLI tools which already bootstrap the corresponding Magento 1 or 2 core classes. For Magento 1 I first took a full product collection and parsed each product for review data. All the data is saved to a tab-separated CSV file which is then used for the Magento 2 product review import.
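The Magento 1 side boils down to something like this (a simplified sketch, assuming your CLI tool has already bootstrapped Mage::app() and ignoring star rating values):

// export approved product reviews to a tab separated CSV file
$fh = fopen('reviews.csv', 'w');
$products = Mage::getModel('catalog/product')->getCollection()->addAttributeToSelect('sku');
foreach ($products as $product) {
    $reviews = Mage::getModel('review/review')->getCollection()
        ->addEntityFilter('product', $product->getId())
        ->addStatusFilter(Mage_Review_Model_Review::STATUS_APPROVED);
    foreach ($reviews as $review) {
        fputcsv($fh, array(
            $product->getSku(),
            $review->getNickname(),
            $review->getTitle(),
            $review->getDetail(),
            $review->getCreatedAt()
        ), "\t");
    }
}
fclose($fh);

The Magento 2 import is essentially the reverse. Another sketch, assuming a CLI command class with \Magento\Review\Model\ReviewFactory and \Magento\Catalog\Api\ProductRepositoryInterface injected as $this->reviewFactory and $this->productRepository, and $storeId set to your store view:

// read the tab separated CSV produced by the Magento 1 export and create reviews
$fh = fopen('reviews.csv', 'r');
while ($row = fgetcsv($fh, 0, "\t")) {
    list($sku, $nickname, $title, $detail, $createdAt) = $row;
    $product = $this->productRepository->get($sku);
    $review = $this->reviewFactory->create();
    $review->setEntityId($review->getEntityIdByCode(\Magento\Review\Model\Review::ENTITY_PRODUCT_CODE))
        ->setEntityPkValue($product->getId())
        ->setNickname($nickname)
        ->setTitle($title)
        ->setDetail($detail)
        ->setStatusId(\Magento\Review\Model\Review::STATUS_APPROVED)
        ->setStoreId($storeId)
        ->setStores([$storeId])
        ->save();
    // created_at is set on save, so restore the original review date afterwards if you need it
    $review->setCreatedAt($createdAt)->save();
}
fclose($fh);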
WEB3 is becoming the buzzword of the year – if you don’t know exactly what it is join the club! Suffice to say that WEB3 will decentralise the Internet enabling us to do all the stuff we love on the Internet with a whole load of new decentralised blockchain applications (dapps).
If you have already used any dapps you will be familiar with crypto wallet apps such as MetaMask that allow you to access your accounts on various blockchains. I was interested in how these dapps connect to MetaMask, and came across a GitHub project using MetaMask as a passwordless user authentication system for web apps. This seemed like a great way to do user logins for some of my applications.
The demo below shows how to connect with a web3 wallet and then authenticate the user to a PHP application by signing a message within the wallet to validate the account.
How it works
Let’s look at the workflow used by the client javascript and server php code.
First on the client we try and initialise a web3 provider and connect to a wallet with web3Connect(). This will fetch and initialise account information variables and update the gui. If no provider is found we launch a new Web3Modal window which is a single Web3 / Ethereum provider solution for all Wallets.
With the wallet connected and the account public address identified we can offer a user login using the public address as the unique identifier. First we call web3Login() which initiates the backend login process. We are using the axios plugin to post the login data to the backend PHP script, which queries an SQL database to check if the user account exists or creates a new account. The backend generates a cryptographic nonce which is passed back as a sign request to the client. The client requests that the message be signed in the wallet and the signed response is sent back to the server to be authenticated.
We now have the server generated message, the same message signed by the user and the user's public address. The backend performs some cryptographic magic in order to determine if the original message was signed with the same private key to which the public address belongs. The public address also works as a username to identify the account. If the signed message and public address belong to the same private key, it means that the user who is trying to log in is also the owner of the account.
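That "magic" is essentially ecrecover implemented in PHP. A minimal sketch of the check using the kornrunner/keccak and simplito/elliptic-php Composer packages (the package choice is my assumption, the demo backend may do it differently):

use Elliptic\EC;
use kornrunner\Keccak;

function verifySignature(string $message, string $signature, string $address): bool
{
    // hash the message exactly as personal_sign does, with the Ethereum prefix
    $hash = Keccak::hash("\x19Ethereum Signed Message:\n" . strlen($message) . $message, 256);
    // split the 65 byte hex signature into r, s and the recovery id
    $r     = substr($signature, 2, 64);
    $s     = substr($signature, 66, 64);
    $recId = ord(hex2bin(substr($signature, 130, 2))) - 27;
    if ($recId !== ($recId & 1)) {
        return false;
    }
    // recover the public key and derive the address from its keccak hash
    $ec        = new EC('secp256k1');
    $publicKey = $ec->recoverPubKey($hash, ['r' => $r, 's' => $s], $recId);
    $recovered = '0x' . substr(Keccak::hash(substr(hex2bin($publicKey->encode('hex')), 1), 256), 24);
    return strtolower($address) === strtolower($recovered);
}

If the recovered address matches the address the client sent, the signature is valid and the login can proceed.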
After authentication the backend creates a JSON Web Token (JWT) to authenticate further user requests. PHP Session data is created by the backend which allows the authentication to persist between visits, with the backend using the JWT token to authenticate the user with each page request. The user is now logged in and the client updates the frontend gui accordingly.
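For the JWT part something like the firebase/php-jwt package does the job; a sketch with example claims ($secretKey and the claim values are assumptions):

use Firebase\JWT\JWT;
use Firebase\JWT\Key;

$payload = [
    'iss' => 'web3login',
    'sub' => $publicAddress,   // the wallet address identifies the user
    'iat' => time(),
    'exp' => time() + 3600     // token valid for one hour
];
$jwt = JWT::encode($payload, $secretKey, 'HS256');

// on each subsequent request the backend validates the token
$decoded = JWT::decode($jwt, new Key($secretKey, 'HS256'));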
To complete the demo a logout button is included to log the user out. In this demo anyone can create a new user account and “login”. In practice to restrict user access the backend would have a user approval process to enable new accounts, additionally user groups can be created to apply permissions to the account which are then used by the backend pages to determine which content is available to the user.
Client side js
"use strict";
/**
*
* WEB3 Application Wallet Connect / Passwordless client Login
*
*
*/
// Unpkg imports
const Web3Modal = window.Web3Modal.default;
const WalletConnectProvider = window.WalletConnectProvider.default;
//const Fortmatic = window.Fortmatic;
const evmChains = window.evmChains;
// Web3modal instance
let web3Modal
// provider instance placeholder
let provider=false;
// Address of the selected account
let selectedAccount;
// web3 instance placeholder
let web3=false;
/**
* ready to rumble
*/
jQuery(document).ready(function(){
// initialise
init();
});
/**
* Setup the orchestra
*/
async function init() {
// gui button events
//
$('#web3connect').on('click', function () {
web3Connect();
});
$('#web3login').on('click', function () {
web3Login();
});
$('#web3logout').on('click', function () {
web3Logout();
});
//$('#web3disconnect').on('click', function () {
// web3Disconnect();
//});
console.debug("Initialising Web3...");
console.log("WalletConnectProvider is", WalletConnectProvider);
//console.log("Fortmatic is", Fortmatic);
console.log("window.web3 is", window.web3, "window.ethereum is", window.ethereum);
console.debug('' + web3App.loginstate);
// Check that the web page is run in a secure context,
// as otherwise MetaMask won't be available
if(location.protocol !== 'https:') {
// https://ethereum.stackexchange.com/a/62217/620
console.debug('HTTPS not available, Doh!');
return;
}
// Tell Web3modal what providers we have available.
// Built-in web browser provider (only one can exist at a time)
// like MetaMask, Brave or Opera is added automatically by Web3modal
const providerOptions = {
walletconnect: {
package: WalletConnectProvider,
options: {
// test key
infuraId: "8043bb2cf99347b1bfadfb233c5325c0",
}
},
//fortmatic: {
// package: Fortmatic,
// options: {
// TESTNET api key
// key: "pk_test_391E26A3B43A3350"
// }
//}
};
// https://github.com/Web3Modal/web3modal
//
web3Modal = new Web3Modal({
cacheProvider: true,
providerOptions
});
console.log("Web3Modal instance is", web3Modal);
if (web3Modal.cachedProvider) {
console.debug('Cached Provider found');
web3Connect();
initPayButton();
}
}
/**
* Fetch account data for UI when
* - User switches accounts in wallet
* - User switches networks in wallet
* - User connects wallet initially
*/
async function refreshAccountData() {
// Disable button while UI is loading.
// fetchAccountData() will take a while as it communicates
// with Ethereum node via JSON-RPC and loads chain data
// over an API call.
updateGuiButton('web3connect','CONNECTING',true);
await fetchAccountData(provider);
}
/**
* Get account data
*/
async function fetchAccountData() {
// init Web3 instance for the wallet
if (!web3) {web3 = new Web3(provider);}
console.log("Web3 instance is", web3);
// Get connected chain id from Ethereum node
const chainId = await web3.eth.getChainId();
// Load chain information over an HTTP API
let chainName='Unknown';
try {
const chainData = evmChains.getChain(chainId);
chainName=chainData.name;
} catch {
// error...
}
console.debug('Connected to network : ' + chainName + ' [' + chainId + ']');
// Get list of accounts of the connected wallet
const accounts = await web3.eth.getAccounts();
// MetaMask does not give you all accounts, only the selected account
console.log("Got accounts", accounts);
selectedAccount = accounts[0];
web3.eth.defaultAccount = selectedAccount;
console.debug('Selected account : ' + selectedAccount);
// Go through all accounts and get their ETH balance
const rowResolvers = accounts.map(async (address) => {
web3App.ethAddress = address;
const balance = await web3.eth.getBalance(address);
// ethBalance is a BigNumber instance
// https://github.com/indutny/bn.js/
const ethBalance = web3.utils.fromWei(balance, "ether");
const humanFriendlyBalance = parseFloat(ethBalance).toFixed(4);
console.debug('Wallet balance : ' + humanFriendlyBalance);
});
// Because rendering account does its own RPC communication
// with Ethereum node, we do not want to display any results
// until data for all accounts is loaded
console.debug ('Waiting for account data...');
await Promise.all(rowResolvers);
// Update GUI - wallet connected
//
updateGuiButton('web3connect','CONNECTED',true);
if (web3App.loginstate=='loggedOut')
{
updateGuiButton('web3login',false,false);
} else {
updateGuiButton('web3login',false,true);
}
console.debug ('Wallet connected!');
}
/**
* Connect wallet
* when button clicked
* or auto if walletConnect cookie set
*/
async function web3Connect() {
try {
// if no provider detected use web3 modal popup
//
if (!provider)
{
console.log("connecting to provider...", web3Modal);
console.debug("connecting to provider...");
provider = await web3Modal.connect();
}
// Subscribe to accounts change
provider.on("accountsChanged", (accounts) => {
fetchAccountData();
web3Disconnect();
console.debug('Account changed to - ' + accounts);
});
// Subscribe to chainId change
provider.on("chainChanged", (chainId) => {
fetchAccountData();
web3Disconnect();
console.debug('Network changed to - ' + chainId);
});
} catch(e) {
eraseCookie('walletConnect');
console.debug("Could not get a wallet connection", e);
return;
}
await refreshAccountData();
}
/**
* web3 passwordless application login
*/
async function web3Login() {
if (!provider){web3Connect();}
let address=web3App.ethAddress;
address = address.toLowerCase();
if (!address || address == null) {
console.debug('Null wallet address...');
return;
}
console.debug('Login sign request starting...');
axios.post(
"/web3login/",
{
request: "login",
address: address
},
web3App.config
)
.then(function(response) {
if (response.data.substring(0, 5) != "Error") {
let message = response.data;
let publicAddress = address;
handleSignMessage(message, publicAddress).then(handleAuthenticate);
function handleSignMessage(message, publicAddress) {
return new Promise((resolve, reject) =>
web3.eth.personal.sign(
web3.utils.utf8ToHex(message),
publicAddress,
(err, signature) => {
if (err) {
web3App.loginstate = "loggedOut";
console.debug('' + web3App.loginstate);
}
return resolve({ publicAddress, signature });
}
)
);
}
function handleAuthenticate({ publicAddress, signature }) {
try {
if (!arguments[0].signature){throw "Authentication cancelled, invalid signature"; }
if (!arguments[0].publicAddress){throw "Authentication cancelled, invalid address"; }
console.debug('Login sign request accepted...');
axios
.post(
"/web3login/",
{
request: "auth",
address: arguments[0].publicAddress,
signature: arguments[0].signature
},
web3App.config
)
.then(function(response) {
console.log(response);
if (response.data[0] == "Success") {
console.debug('Web3 Login sign request authenticated.');
web3App.loginstate = "loggedIn";
console.debug('' + web3App.loginstate);
web3App.ethAddress = address;
web3App.publicName = response.data[1];
web3App.JWT = response.data[2];
updateGuiButton('web3login','Logged in as ' + web3App.publicName,true);
updateGuiButton('web3logout',false,false);
}
})
.catch(function(error) {
console.error(error);
updateGuiButton('web3login','LOGIN',false);
});
} catch(err) {
console.error(err);
updateGuiButton('web3login','LOGIN',false);
}
}
}
else {
console.debug("Error: " + response.data);
}
})
.catch(function(error) {
console.error(error);
});
}
/**
* web3 Disconnect wallet
*/
async function web3Disconnect()
{
console.debug("Killing the wallet connection");
// TODO: Which providers have close method?
if(provider) {
provider = null;
await web3Modal.clearCachedProvider();
}
localStorage.clear();
selectedAccount = null;
updateGuiButton('web3connect','CONNECT',false);
console.debug("Disconnected");
}
/**
* web3 Logout
*/
async function web3Logout()
{
console.debug("Clearing server side sessions...");
fetch('/web3logout/')
.then((resp) => resp.json())
.then(function(data) {
// logged out
//
web3App.loginstate = "loggedOut";
web3Disconnect();
console.debug('' + web3App.loginstate);
updateGuiButton('web3login','LOGIN',false);
updateGuiButton('web3logout','LOGOUT',true);
})
.catch(function(error) {
console.debug(error);
});
}
/**
* pay button
*/
const initPayButton = () => {
$('#web3pay').click(() => {
if (!provider){web3Connect();}
console.debug('Requesting transaction signature...');
const paymentAddress = '0x';
const paymentAmount = 1;
web3.eth.sendTransaction({
to: paymentAddress,
value: web3.utils.toWei(String(paymentAmount),'ether')
}, (err, transactionId) => {
if (err) {
console.debug('Payment failed', err.message);
} else {
console.debug('Payment successful', transactionId);
}
})
})
}
/**
* update gui buttons
*/
function updateGuiButton(element,text,status)
{
if (text)
{
$("#" + element).val(text);
}
// disabled button=true
// enabled button=false
if (status==true)
{
$("#" + element).prop("disabled",true).css("cursor", "default");
} else {
$("#" + element).prop("disabled",false).css("cursor", "pointer");
}
}
/**
* debug logger
*/
(function () {
var logger = document.getElementById('log');
console.debug = function () {
for (var i = 0; i < arguments.length; i++) {
if (web3App.debug)
{
console.log(arguments[i]);
if (typeof arguments[i] == 'object') {
logger.innerHTML = (JSON && JSON.stringify ? JSON.stringify(arguments[i], undefined, 2) : arguments[i]) + '\n' + logger.innerHTML;
} else {
logger.innerHTML = 'Web3App : ' + arguments[i] + '\n' + logger.innerHTML;
}
}
}
}
})();
I started this blog way back in 2010.
For years, it was a place where I shared thoughts, ideas, and whatever else was on my mind.
But then it kinda got left behind, life happened, things changed.
And like many others, I just stopped posting here.
At first, I blamed it on being too busy.
Then on not having anything to say.
But if I’m honest, the real reason has become clearer lately:
The internet has changed.
In 2025, it feels like blogs — once vibrant, personal spaces — have been pushed to the sidelines.
Google has changed too.
It’s not the same search engine we used to know; now it feels like AI is answering everything before you even have a chance to click a link.
People don’t “browse” the web like they used to.
They ask AI a question and get an answer immediately.
Quick.
Effortless.
Efficient.
Who needs to scroll through endless blog posts anymore?
Right…?
But here’s the thing.
Last night, I realised something.
Maybe blogs aren’t dead.
Maybe they’ve just evolved — like everything else online.
A blog is still a place where you own your words.
Where you choose the conversation.
Where you’re not boxed in by an algorithm, a feed, or a chatbot’s summarised answer.
AI can give you an answer.
But a blog gives you a person.
A real voice, not just a response.
So maybe it’s not that blogs are dead.
Maybe we just forgot why they mattered in the first place.