We are still being targeted by a Distributed Denial of Service (DDoS) attack. On average, it takes our network down every four hours, with each outage lasting one to three minutes; sometimes longer, sometimes shorter.
The original course of action was to simply try to wait things out. Most attacks of this nature tend to stop after a short period of time, once the attacker gets bored and moves elsewhere. However, we now believe these attacks specifically target us because of our affiliation with a third-party wholesale marketplace, and because of our use and promotion of “lifetime” hosting services.
The attacks first began in November 2023 and took our network down for several days. They also coincided with the closure of another popular hosting provider, which promoted its services on the same wholesale marketplace as DoRoyal. We have reason to believe that the person responsible was originally a customer of that now-defunct web host and has opted to take their frustrations out on our network in the form of a large-scale DDoS attack.
Moving forward to March 2024, we began to detect network activity far beyond the norm. However, none of our automated security systems or firewalls flagged this activity as malicious; it was treated like typical web-based traffic. As time went on, DoRoyal staff and network administrators began to notice unusual load abnormalities, which eventually resulted in several minutes of downtime. During these downtime events, the server load would spike to over 130, and network activity increased as well, with incoming traffic rising by 250 to 800 Mbps. Because of the way this traffic was sent, our server did not immediately recognize it as an attack. By the time it did, an average of ten seconds after the attack commenced, the load averages were already high enough that our firewall could no longer block and mitigate the attack effectively. An outage was therefore logged, and it continued until the server load normalized one to three minutes later.
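For readers curious what this detection problem looks like in practice, below is a minimal, illustrative sketch (not our production tooling) of the kind of early-warning check that would flag this pattern. It assumes a Linux host where the one-minute load average can be read from /proc/loadavg and interface byte counters from /proc/net/dev; the interface name, thresholds, and polling interval are placeholders chosen to mirror the numbers described above.

```python
#!/usr/bin/env python3
"""Illustrative early-warning check for the load/traffic pattern described above.

Assumptions (placeholders, not DoRoyal's actual tooling):
  - a Linux host exposing /proc/loadavg and /proc/net/dev
  - 'eth0' stands in for the public-facing interface
  - thresholds mirror the observed attack profile: 1-minute load over 130
    and sustained inbound traffic above roughly 250 Mbps
"""
import time

IFACE = "eth0"          # placeholder interface name
LOAD_THRESHOLD = 130.0  # load spike observed during outages
MBPS_THRESHOLD = 250.0  # low end of the observed 250-800 Mbps surge
POLL_SECONDS = 2        # poll faster than the ~10 s detection lag described

def load_1min() -> float:
    """Read the one-minute load average."""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def rx_bytes(iface: str) -> int:
    """Read the total received-bytes counter for an interface."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                return int(line.split(":", 1)[1].split()[0])
    raise ValueError(f"interface {iface} not found")

def main() -> None:
    prev = rx_bytes(IFACE)
    while True:
        time.sleep(POLL_SECONDS)
        cur = rx_bytes(IFACE)
        mbps = (cur - prev) * 8 / POLL_SECONDS / 1_000_000
        prev = cur
        load = load_1min()
        if load > LOAD_THRESHOLD or mbps > MBPS_THRESHOLD:
            # In practice this would page an admin or trigger a firewall rule;
            # here it only prints, since the real mitigation hook is site-specific.
            print(f"ALERT: load={load:.1f}, inbound={mbps:.0f} Mbps")

if __name__ == "__main__":
    main()
```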
MITIGATION
We have been using all of the usual and recommended methods of mitigation. However, this attack is not reliably or accurately detected by our datacenter-level firewall, so no DDoS mitigation is triggered at that level. Instead, we are forced to rely entirely on server-side mitigation techniques. While we have been able to stop each attack within seconds of it starting, that is simply not fast enough to avoid a service outage.
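As a rough illustration of what server-side mitigation can involve (a simplified sketch, not the exact firewall logic we run), one common stop-gap is to count concurrent connections per source address and drop the noisiest offenders. The connection threshold, the use of the ss utility, and the iptables rule below are assumptions made purely for the example.

```python
#!/usr/bin/env python3
"""Illustrative per-source mitigation sketch; not the firewall DoRoyal actually runs.

Assumptions: a Linux host with the 'ss' utility and iptables available (root
required for blocking), and that dropping the noisiest source addresses is an
acceptable stop-gap. The threshold is a placeholder.
"""
import subprocess
from collections import Counter

CONN_THRESHOLD = 200  # placeholder: max concurrent connections per source IP

def connections_by_source() -> Counter:
    """Count current TCP connections per remote address."""
    out = subprocess.run(["ss", "-Hnt"], capture_output=True, text=True, check=True)
    counts: Counter = Counter()
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 5:
            peer = fields[4].rsplit(":", 1)[0]  # strip the port
            counts[peer] += 1
    return counts

def block(ip: str) -> None:
    """Drop further traffic from a source address."""
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

if __name__ == "__main__":
    for ip, n in connections_by_source().items():
        if n > CONN_THRESHOLD:
            print(f"blocking {ip} ({n} connections)")
            block(ip)
```

The weakness of any approach like this is exactly what we described above: by the time per-source counts cross a threshold, the load average may already be high enough that the block comes too late to prevent an outage.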
SOLUTION
We are looking into three potential solutions to this ongoing problem. Each solution has potential downsides, as well as its own unique advantages.
Attempt To Salvage Royal 2
This solution is to perform additional optimizations on our firewall and hope that one of them is able to effectively and reliably mitigate these attacks. This is what we have been trying to do so far, with no real success. However, it may still prove possible.
Reset Royal 2 Network
This solution would have us decommission our current Royal 2 web server and bring an entirely new server online, with brand new IP addresses and new yet similar hardware. The replacement would effectively be a carbon copy of our current Royal 2 server, albeit with an entirely new network card and interface.
Decommission Royal 2 – Introduce Royal 3, Royal 4, Royal 5, Royal 6, etc
This solution would have us fully decommission the Royal 2 web server and bring several additional servers online. While these servers would not be as powerful as our current Royal 2, we would distribute all accounts across them based on load averages, network utilization, and account size. (Royal 2 is optimized for 12,000 websites.)
We will share more as soon as we’ve determined the proper course of action.