Geopolitical considerations as part of Technology risk

This thread started off as a discussion at the local Mamak (the Malaysian colloquial term for your neighbourhood cafe), where a bunch of security and tech folks meet up to ponder the world and its business woes.

The discussion started off with the question “How do you decide on your tech purchase? What are your consideration factors?”

Our conservative buddy came up and said “You can never go wrong with Brand X! Tried and tested.” That seems to indicate selection criteria based on market presence, branding, prominence and adoption.

The bleeding edge/challenge-the-status-quo person came up and said “Why not Open Source?” It’s mature enough for adoption, and more organisations are cozying up to the idea that Open Source will work, provided that support is available.

Then comes the CIO, who made it clear that his/her choice would be cost based. Why bother paying a premium when you can get a good bargain from a reasonable alternative? Pricing would be the ultimate deciding factor, provided the product meets the bare minimum.

I had to open my mouth and ask, “What about geopolitical considerations?” Everyone had a flustered look, some in amazement, and some pretending it was not even a factor. Geopolitical? Is that even necessary?

What is geopolitical consideration/risk?

This is a consideration where you look at the origin country of a technology and consciously decide to pair it with technology from another country. For example, if the first tier of firewalls originates from the US, the second tier may be purchased from Russia (ignoring that the underlying hardware may all originate from China; the consideration here is based on vendor origin, not component origin, although separating by component origin would be a more severe form of geopolitical risk separation).

History Lesson – PGP

A little bit of a history lesson on technology, starting with cryptography. PGP was created by Phil Zimmermann in 1991, with the intention of securing communications between activists and preventing snooping. The software was free to use, as long as it was not for commercial use. Eventually PGP found its way onto the Internet and was adopted widely as an added encryption layer on top of email.

In 1993, Zimmermann became the target of a criminal investigation. US export rules at the time treated strong cryptography as a munition, with only weak (roughly 40-bit) keys generally approved for export; PGP went well beyond that, using 128-bit symmetric keys and RSA keys defaulting to 1024 bits. Zimmermann was investigated for “munitions export without a license”, the definition of munitions covering guns, bombs and even software. For reasons never made public, the case never proceeded and was eventually dropped in 1996 without any criminal charges being filed.

Zimmermann was determined to make his software public. He identified a loophole: the First Amendment protects the export of books. Through MIT Press, Zimmermann published the source code of PGP as a book. One simply had to procure the book, scan the contents and digitise them using OCR (Optical Character Recognition), or simply type the code in by hand.

More challenges on export

A similar situation happened to D.J. Bernstein, who wanted to publish the source code of his Snuffle encryption system. Together with the EFF, Bernstein challenged the export rules. After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code is speech protected by the First Amendment, and that government restrictions preventing its publication are unconstitutional.

Why geopolitical risk?

The world is already borderless. Technology crosses boundaries easily and without much hassle. However, G2G (government-to-government) relationships are never that smooth. Technology sold by a company is governed by the laws of the country where that company is headquartered. The law of the land therefore plays a crucial, if indirect, role in determining the availability of technology.

The most common technology denominator is the USA, which produces the majority of the technology innovations the world uses. An example from the earlier part of this article is encryption/cryptography. As algorithms become prevalent, their use often becomes subject to export restrictions.

The rise of nation states

A borderless world creates borderless problems. The hacking scene (not the “Texas Chainsaw Massacre” type) used to be fueled by hormone-raged, idealism-filled teens, or just curious cats trying to learn tech. Today, dominance in cyberspace is seen as a sign of “cyber-sovereignty”, and an arms race towards cyber dominance is imminent. (Man, I really abuse the word cyber this time…)

As explained earlier, the battleground has shifted into the cyber world. Corporations are becoming unwilling victims in the fight for dominance. Nation-states may infiltrate large corporate organisations to further their agenda, implanting their own tech people to directly influence the product build. This means that the product that gets shipped may be laced with malicious code, backdoors or even intentional vulnerabilities for nation-state actors to abuse freely.

Export laws, sanctions and politics

Open any news site right now and you’ll read about trade wars between governments. In recent news, one government stood firm and took action against another country over alleged espionage. This resulted in key companies from that country being denied business and having high levies and taxes imposed on them. The situation created a tit-for-tat reaction, causing a downward spiral of impact on other organisations that form part of the ecosystem.

Standards and tech volition

As if export restrictions were not enough, in a new twist to the developing story, standards organisations are now becoming subject to such rulings. One standards body referred to worldwide has stepped up and banned researchers from a particular country from acting as moderators or participating in standards development. This has far-reaching impact on the global community.

Firstly, countries that are not part of the trade war become unwilling victims as the standards body aligns itself with one country’s stance. Secondly, those countries now have to re-evaluate and establish their own standards, or subscribe to a common standard in which all vendors are given a chance to participate. ISO (the International Organization for Standardization) is a global standards body that prides itself on being independent of country-level politics (even though standards are voted on along country lines and affiliations).

On one hand, you need a standards body as a reference point; on the other, you need to start excluding standards bodies that show affiliation to country-level policies. Aligning standards to a country-specific set will be another arduous task.

Long story short

Countries today can no longer exclude geopolitical factors from risk. This is evident in recent developments in the international arena, the current trade wars and Brexit. While moving towards Industry 4.0, it is important to no longer hide in a shell, but to understand that borderless is a reality and that new sets of regulations are emerging to govern tech and its use.

Do you need BCP for Cloud?

I woke up feeling very warm. I thought I had missed the alarm, but it was just 3:23 am. Very sure I didn’t need a potty break, extremely sleepy and obviously upset. I leaned over to check the AC (air-conditioning) and found that it was off. I was very sure it was too warm and that by now the AC should have kicked in. Mumbling, I dragged my already tired and weary body towards the thermostat to see what was happening.

After blinking a few times to get my sight back to normal, I found that the Nest thermostat wasn’t working. Walking back to my bedside table to grab my phone (I know, it’s a bad habit), I checked to see if the internet was down. WiFi seemed up; I checked my public IP (instead of good ol’ ping) and everything seemed okay. Google search loaded fine. Still with sleep in my head, I rummaged through my bedside drawer for the remote and turned the AC on. “This is too much work,” grumbled my half-sleepy head. That was enough for the night.


I woke up in the morning with a sleep hangover (yes, it’s possible when you don’t have enough sleep) and tried to figure out what had happened. I turned on twtr and, true enough, reports of a Google Cloud services failure started trickling in.

URL: https://status.cloud.google.com/incident/compute/19003

The horror! Google Cloud services went down?

*My panicked head screaming – The sky has fallen! The Sky has fallen!*

This pretty much explains why the thermostat went down. I wondered how many threat actors lost their C2 hosted on Google services, how many IoT devices like the Nest thermostat stopped working, and how many other dependent services broke. If, as an end user, I am grumbling about service availability, how about corporate organisations relying on cloud services?


Today’s organisations rely heavily on cloud. Business today runs on cloud. Social media runs on cloud. Almost everything runs on cloud. Whether it’s servers/virtual servers, serverless or functions (you name it), it runs on cloud. (Disclaimer: most of my stuff also runs on cloud…)

But is a cloud outage a rarity? Well, it depends on what you deem rare. The Internet forgives, but never forgets. On August 25, 2013, AWS suffered an outage, bringing down Vine and Instagram with it. On March 14, 2019, Facebook went down, taking WhatsApp with it in an apparent server configuration change issue.

The impact is obvious: businesses lose revenue when services go down. Local players such as AirAsia run their kit mostly on cloud. The impact is devastating; imagine flight bookings going dark. The same goes for plenty of other businesses. This brings up an interesting point: what is your business continuity plan if the cloud goes down?

When I had this conversation a few years ago, most CIOs I spoke to boldly claimed that their BCP is the cloud (we never reached the part about cloud and security, because the discussion is most often dominated by the cost debate). There was no need, they said, given the apparent global redundancy of cloud infrastructure. The once-sleeping-soundly-at-night CIOs are now rudely awakened (just like me, thanks to the broken thermostat) to the fact that the cloud no longer offers the comfort they thought they had bought, after years of CAPEX (capital expenditure) and happily paying cloud providers their monthly dues to keep services up.

A few points to note for those interested in even thinking about Cloud BCP. Yes, it’s time we take the skeletons out of the closet and start talking about this.

Firstly, can your applications and services run on a completely different cloud provider? Let’s look at the layers of services before we answer this question.

XKCD - The Cloud

If you are running server images (compute cloud), it’s completely possible to run on a different cloud provider. You’ll need to be able to replicate the server image across cloud providers. You can script the setup of your cloud server, create a repository to host your configuration files, and execute the setup script to bring up the services at a separate cloud provider. The setup and configuration can be hosted in a private git/svn repository and pulled when needed, as in the rough sketch below.
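As a minimal sketch of that idea (the repository URL, checkout path and bootstrap.sh script are hypothetical placeholders; the point is that the same bootstrap should work on any provider’s plain Linux image):

```python
# Sketch: provider-agnostic server bootstrap from a private config repo.
# CONFIG_REPO, CHECKOUT_DIR and bootstrap.sh are hypothetical examples.
import subprocess
import sys

CONFIG_REPO = "git@git.example.com:infra/server-config.git"  # hypothetical repo
CHECKOUT_DIR = "/opt/server-config"

def bootstrap():
    """Clone the configuration repo and run its setup script on a fresh VM."""
    subprocess.run(["git", "clone", CONFIG_REPO, CHECKOUT_DIR], check=True)
    # The only provider-specific step is creating the VM itself;
    # everything after that is driven by the versioned setup script.
    subprocess.run(["bash", f"{CHECKOUT_DIR}/bootstrap.sh"], check=True)

if __name__ == "__main__":
    try:
        bootstrap()
    except subprocess.CalledProcessError as err:
        sys.exit(f"Provisioning failed: {err}")
```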

What about data? Most database services provide replication and backup facilities. For “modern” database services, data can be spread across multiple databases for better availability and redundancy.
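Even a plain dump-and-restore between providers can serve as a crude fallback. A rough sketch, assuming PostgreSQL on both sides (hostnames, users and the database name are hypothetical, and credentials are expected to come from .pgpass or the PGPASSWORD environment variable):

```python
# Sketch: crude cross-provider data copy using standard PostgreSQL tools.
# Managed replication from the provider is usually preferable when available.
import subprocess

PRIMARY_HOST = "db.primary-cloud.example.com"    # hypothetical
STANDBY_HOST = "db.secondary-cloud.example.com"  # hypothetical
DB_NAME = "appdb"
DUMP_FILE = "/tmp/appdb.sql"

# Dump from the primary provider...
subprocess.run(
    ["pg_dump", "-h", PRIMARY_HOST, "-U", "backup_user", "-d", DB_NAME, "-f", DUMP_FILE],
    check=True,
)
# ...and restore into the standby provider.
subprocess.run(
    ["psql", "-h", STANDBY_HOST, "-U", "restore_user", "-d", DB_NAME, "-f", DUMP_FILE],
    check=True,
)
```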

The real stickler for hybrid cloud is serverless/function-based hosting. If the organisation invests heavily in one particular cloud provider’s technology (without naming any particular provider), then it depends on the portability of that technology. If something common such as Python is used, portability is pretty much assured. Technologies that are exclusive to one cloud provider will have portability issues across different cloud providers.
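One way to keep function code portable is to isolate the business logic from the provider-specific entry points. A sketch, assuming an AWS Lambda behind API Gateway and an HTTP-triggered Google Cloud Function as the two targets (process_order is a hypothetical example of business logic):

```python
# Sketch: one portable core function, thin per-provider adapters around it.
import json

def process_order(payload: dict) -> dict:
    """Provider-agnostic business logic (hypothetical example)."""
    return {"order_id": payload.get("order_id"), "status": "accepted"}

# AWS Lambda entry point (API Gateway proxy-style event).
def lambda_handler(event, context):
    result = process_order(json.loads(event.get("body") or "{}"))
    return {"statusCode": 200, "body": json.dumps(result)}

# Google Cloud Functions HTTP entry point (Flask-style request object).
def gcf_handler(request):
    result = process_order(request.get_json(silent=True) or {})
    return json.dumps(result), 200, {"Content-Type": "application/json"}
```

Only the adapters need rewriting when you swing providers; the core logic stays untouched.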

Another question that needs answering is: how would you “swing” your services across different cloud providers? A common approach for internet-facing availability is to use DNS. Using DNS, the organisation can change the location of services by changing DNS records, allowing seamless failover without having to change the URL. However, the speed of failover is determined by the DNS TTL (time-to-live) configured on that record. Too low, and your DNS will be constantly hit with queries, but changes are almost instantaneous (a low TTL is usually around 15 to 30 minutes). Too high, and your DNS infrastructure sees less traffic, but it takes a long time before the failover actually happens. DNS-based failover also creates an administrative headache for firewall administrators, as they have to change their approach from IP-based to DNS-based access control lists.
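To make the idea concrete, here is a minimal sketch of DNS-based failover logic. The health-check URL, record name and standby IP are hypothetical, and update_dns_record() is a placeholder for whatever API your DNS provider exposes; the TTL on the record is what governs how quickly clients actually follow the switch:

```python
# Sketch: health-check the primary, repoint the DNS record to the standby on failure.
import urllib.request
import urllib.error

PRIMARY_URL = "https://app.primary-cloud.example.com/health"  # hypothetical health endpoint
STANDBY_IP = "203.0.113.10"                                    # documentation-range example IP
RECORD_NAME = "app.example.com"
TTL_SECONDS = 900  # ~15 minutes: a common compromise between query load and failover speed

def primary_is_healthy(timeout: int = 5) -> bool:
    """Return True if the primary site answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_URL, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def update_dns_record(name: str, ip: str, ttl: int) -> None:
    """Placeholder: replace with your DNS provider's API call."""
    print(f"Would point {name} -> {ip} with TTL {ttl}s")

if not primary_is_healthy():
    update_dns_record(RECORD_NAME, STANDBY_IP, TTL_SECONDS)
```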


All of cloud isn’t just hot air. Moving towards Industry 4.0 (now I’m just throwing buzzwords around), cloud adoption is definitely a core component of the technology strategy that each organisation needs to have. As time goes by, we find that even the cloud is fallible; hence, a proper approach towards cloud is key to business continuity.

So, what’s your approach towards Cloud Services BCP?