Yet another Facebook leak… 533M records!

Almost everyone on this planet (dogs, cats and pet parrots included) is listed on Facebook. Other social media platforms hold similar data, though none is as penetrative as Facebook.

What started as a college fling tracking site quickly outgrew its pubescent phase and matured into a global social media giant, with a willing John Q. Public happily (and at times scarily) handing over personal data. Facebook quickly became an advertising darling and a platform for marketing, social outreach and, often, information warfare (as seen during the last US presidential election campaigns).

In 2019, two breaches affected Facebook: one in March/April and the other in September. The most recent leak, supposedly affecting 533M records, was initially attributed to the September incident. A more detailed look, however, reveals that the underlying vulnerability may have been lingering since 2012!

The March/April breach (which Facebook claims to have addressed) appears to have been due to abuse of its own APIs. The Graph/Marketing APIs were abused, in a manner also attributed to the Cambridge Analytica debacle. Facebook stepped in and disabled the offending APIs to prevent further abuse, but not without receiving backlash over the extent of the damage Cambridge Analytica had already caused.

Lucian Constantin, a senior writer for IDG News Service, wrote on ComputerWorld (8 October 2012) that independent researcher Suriya Prakash had found a vulnerability via Facebook's mobile site. Facebook allows users to match their contact lists against existing Facebook accounts. Earlier, Facebook had asked users to submit their mobile numbers to enable SMS-based 2FA to protect their accounts. Now that it held this contact information, it also gave users the option to search for other users by phone number. To make this easier, a setting was introduced: a user can head to "Privacy Settings" > "How You Connect" > "Who can look you up using the email address or phone number you provided", with a default setting of "Everyone" (!)

This means that even if you set your phone number's visibility to "Only me" on your profile page, anyone who knows your number can look you up unless that setting is changed accordingly. Most people, unaware of this, leave the setting at its default, falling prey to this type of attack.
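A toy model makes the trap clearer: the reverse lookup is governed by the *lookup* setting alone, not by how the number is displayed on the profile. (This is an illustrative sketch; the setting names are simplified, not Facebook's actual API.)

```python
# Toy model of the lookup behaviour described above. Illustrative only;
# the setting names are simplified stand-ins, not a real API.

def can_look_up_by_phone(profile_visibility: str, lookup_setting: str) -> bool:
    """A reverse phone-number lookup succeeds based solely on the lookup
    setting, regardless of the number's visibility on the profile page."""
    return lookup_setting == "Everyone"

# Number hidden on the profile, but lookup left at its default of "Everyone":
assert can_look_up_by_phone(profile_visibility="Only me", lookup_setting="Everyone")

# Only changing the lookup setting itself closes the hole:
assert not can_look_up_by_phone(profile_visibility="Only me", lookup_setting="Only me")
```

The point of the sketch: two settings that look related are actually independent, and the dangerous one defaults to open.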

Suriya Prakash claimed that he shared the information with the Facebook security team in August, and after an initial response on 31 August, his emails seemed to end up in /dev/null. A Facebook representative responded that lookups were rate-limited, restricting how quickly users could be found.

This became the actual issue behind the most recent Facebook data breach. Facebook, however, claimed there was no hacking and that this was just another scraping method. Scraping is a means of obtaining information by crawling a site. From my assessment, though, this sits closer to an IDOR (Insecure Direct Object Reference).

In a typical IDOR attack, the attacker simply enumerates objects by incrementing an ID number, e.g. http://website/id=1

The ID value is incremented, revealing other objects until the enumeration is complete. In this case, the ID happens to be the mobile number. The attacker created a phone book of ALL possible phone numbers, uploaded it to Facebook and referenced it against Facebook's own database. Based on the numbers enumerated, one of the victims of this attack was Mark Zuckerberg himself, later identified as having the Signal app on his phone (surprise, surprise!).
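The enumeration above can be sketched in a few lines. Note the "directory" below is a local stand-in for the provider-side contact-matching service; no real API is being called, and the numbers are invented.

```python
# Sketch of phone-number enumeration as an IDOR. The directory is a local
# stand-in for the contact-matching service; numbers are invented.

def enumerate_numbers(prefix: str, digits: int):
    """Generate every possible number under a prefix: the 'phone book'."""
    for n in range(10 ** digits):
        yield prefix + str(n).zfill(digits)

# Stand-in for the provider-side contact-matching database.
directory = {"6012000042": "Alice", "6012000777": "Bob"}

# "Upload" the synthetic phone book and keep every hit.
matches = {num: directory[num] for num in enumerate_numbers("6012000", 3)
           if num in directory}
print(matches)  # {'6012000042': 'Alice', '6012000777': 'Bob'}
```

Rate limiting slows this down but, as the 533M records show, does not stop a patient attacker.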


Hacker vs. UniKL – TA perspective

Editor's note: As part of responsible disclosure, the matter has been sent to MOHE IT/Network Security and MyCERT with the reference number MyCERT-202103221082. I recently got in contact with the CEO of UniKL, and the article was forwarded to him for further action.

In most breach stories, we often hear only one side. Since I reported the breach, UniKL has not reached out to me, nor has any press release been seen on the matter. As observers, you only see the well-drafted press release, often concealing the details of what happened. The extent of an incident is only determined when, and if, the attacker decides to publish the data. While I had no intention of writing anything further on this matter, a close peer nudged me and said I should write a second piece. There wasn't much to pursue, but fate, as it happens, had other plans.

As the earlier article went live on LinkedIn, the attacker, Marwaan (I think I spelt it right), came forward publicly, replying to the article thread and claiming responsibility. This is a rare opportunity: a look at both the attacker and the attack. Marwaan agreed to an interview, the full length of which will be published by the SecurityLah podcast.

But first, if someone claims responsibility, I need to be certain about the claim. Trust, but verify. So I asked for proof in the form of unpublished information that would validate Marwaan's claim. A screenshot was provided, attached below.

That pretty much confirms, to me, that he is indeed the attacker, or at the very least someone with access to the data. Good enough for me. Let's continue.

(No spoilers here, but listen to the full-length interview, to be published in two parts starting Monday 29 March 2021, at SecurityLah.)

I was curious whether Marwaan had indeed contacted UniKL about this matter. I asked him for proof, and he provided screenshots of emails sent to UniKL on the subject.

This seems to corroborate Marwaan's narrative that he reached out to UniKL regarding the system weaknesses.

Some key pointers I picked up throughout the interview: UniKL seems to have taken the matter lightly, performing neither an assessment nor a full incident response. Marwaan also confirmed that, to his knowledge, there are only IT and communications teams, with no cyber security team present. He went on to explain that they took the "Google" approach of searching for and deploying controls without understanding what needed to be done, at times blindly trusting information provided by Marwaan himself.

This doesn't bode well for UniKL, which appears not to be managing the situation or responding accordingly. I'd be happy to get details from UniKL to present a balanced view of what happened from their perspective, with the necessary proof to back their claims, just as I did with Marwaan. So far, everything that has surfaced puts UniKL in a negative light.

Two key issues I picked up from this incident for this article.

First, incident reporting. Do organizations have a way for the general public to report cyber security incidents? When I googled "UniKL report cybersecurity incident", I saw links to UniKL and its cybersecurity programmes, but nothing actually related to, or enabling, the general public reporting cyber security incidents. A case of not practicing what they preach, given that they teach cyber security? This certainly erodes my confidence to even consider studying there, especially cyber security. Marwaan also explained that he was given the run-around, with staff not even knowing what to do when someone reports such issues.

Does your organization suffer from such problems? Only you know.

Secondly, organizations are ill-prepared to face such issues. Incident response and coordination need severe improvement. When such incidents happen, organizations should alert key stakeholders, prepare a holding statement to manage the press, and issue a first take on the matter. The "let's-be-silent-and-this-will-go-away" approach usually ends up making the organization look guilty of concealment, lowers trust in the management's ability, and creates opportunity for further speculation. At this point, I have written two articles, and data on students, staff and bank details may be circulating somewhere, opening the door for future attacks to be even deadlier. Imagine if a bank account were cleared out because the attacker had access to the machines logged into the bank portal.

What can happen from here?

It all depends on the authorities. The spillage of the attack has been confirmed to reach even MOHE, which raises the severity of the matter. MyCERT has been involved (or at least notified, by myself and by the attacker) and will most likely issue a holding statement if this matter blows up. The Personal Data Protection Commissioner is yet to be seen on this matter (I wonder if MyCERT will reach out, inform them and run a joint investigation). While MOHE falls under the category of CNII (Critical National Information Infrastructure), UniKL doesn't. However, by the nature of processing personal data, UniKL comes under PDPA requirements.

Malaysia lacks breach reporting requirements. FireEye announced the SolarWinds hack as part of an SEC filing. It's time Malaysia started looking at something similar, or better. Until such regulations become mandatory, we will continue to see organizations sweeping these issues under the carpet, paving the way for deadlier, more catastrophic attacks. We as a country may have recently launched a strategy, but it remains just a strategy until something firm is implemented and enforced. Granted, the world is facing a pandemic and the focus is there, but cyber threats don't care whether there's a pandemic or not; they will continue to persist.

I'm sitting on the sideline with my box of [redacted] popcorn, watching to see how this unfolds. One thing's for sure: there are tonnes of wisdom to be learnt from this incident.


  1. UniKL Hack – Dr. Suresh Ramasamy –
  2. CNII – CyberSecurity Malaysia –

Singtel breach (2021) – case study

What happened at Singtel?

Singtel released a statement that it is currently investigating a data breach involving customer data. For those who aren't familiar, Singtel is a Singapore-based group of telecommunications companies across Asia, as well as a telco licensee in Singapore.

Singtel was notified by Accellion that the data breach occurred via its file sharing system, which was breached by unidentified threat actors (aka hackers). Singtel explains that it's a standalone system used to share information internally and with external parties.

Singtel explains that its use of the Accellion FTA product was legitimate, with support running until April 2021. In mid-December 2020, Accellion had issued a patch within 72 hours of the zero-day notification. Accellion noted attacks based on the reported zero-days until the end of January 2021.

What about Accellion?

Accellion issued a press release on the matter through its own website.

Interestingly, Accellion made a clear note that the affected product was a 20-year-old, "approaching end-of-life" product. In typical corporate sales fashion, Accellion uses this opportunity to urge its customers to migrate to its newer platforms. It's interesting that Accellion makes it clear the FTA platform is "legacy", implying that, while the product is under support, organizations should either have migrated to newer platforms or be starting to (preferably by "upgrading" to Accellion's own newer version).

Analysis of the incident

Let's look at each part of this and the claims made by the respective organizations.

  1. The FTA system is a standalone system.

My assessment? True and false.

Let's look at the function of the FTA. Essentially it's an FTP (File Transfer Protocol) server used for transferring files in and out of the organization. There seem to be some issues with this setup. Singtel further explains that the platform is used by both internal and external parties.

Did anyone notice a huge blinking red flag here? No? I’ll explain why.

In a typical telco setup, these FTP servers are a crucial part of the equation. CDRs (call data records) are often dropped onto FTP servers before being passed to mediation and, eventually, billing and charging. Again, big red blinking light: CDRs!

Why would file transfer be needed for external parties?

It's used for many reasons; I'll outline two as examples. First, bill payments. Some bill payments use REST APIs for immediate settlement, while others use bulk (aka batch) payment, which transfers files via FTP. A bank may receive payments from its customers throughout the day and push an update every night at 3am. Another scenario would be an outsourcing arrangement in which a third party performs corporate account provisioning, with bulk activation done based on the files provided.
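To make the batch scenario concrete, here is a hypothetical sketch of the nightly settlement file a bank might drop on the FTP server. The field names and CSV format are invented for illustration; real telco/bank batch specs differ.

```python
# Hypothetical sketch of a nightly batch-settlement file, as described above.
# Field names and CSV format are illustrative, not any real telco's spec.
import csv
import io

# The day's accumulated payments, to be settled in one batch at 3am.
payments = [
    {"account": "0123456789", "msisdn": "60121234567", "amount": "50.00"},
    {"account": "0123456789", "msisdn": "60129876543", "amount": "75.50"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["account", "msisdn", "amount"])
writer.writeheader()
writer.writerows(payments)

batch_file = buf.getvalue()  # this is what lands on the FTP server overnight
print(batch_file.splitlines()[0])  # account,msisdn,amount
```

The operational point: because settlement rides on flat files over FTP, the file transfer platform sits directly in the money flow, which is exactly why its isolation matters.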

As good hygiene practice, the file transfer platforms serving internal and external parties should be completely separate and isolated from each other.

Next, the question of whether the system is isolated. For me, an isolated system is one with no connectivity to any other system, like a Windows 10 PC at home connected only to the internet. But a file transfer system? You can be sure the system/network/security admins punched holes in the firewall so the system could receive and transfer files. Yes, it is interconnected; whether it can reach other interfaces (both ethernet and 3G-specific) depends on which ports are open.
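A toy rule check makes the point: "standalone" only holds if no firewall rule lets the file-transfer host reach internal systems. The rule format and host names below are invented for illustration.

```python
# Toy firewall-rule reachability check. Rule format and host names are
# invented for illustration; real firewalls are far richer than this.

rules = [
    {"src": "internet",   "dst": "fta-server", "port": 21, "allow": True},
    {"src": "fta-server", "dst": "billing",    "port": 22, "allow": True},  # the hole
]

def reachable(src: str, dst: str, port: int) -> bool:
    """True if any allow rule permits src -> dst on the given port."""
    return any(r["allow"] and r["src"] == src and r["dst"] == dst
               and r["port"] == port for r in rules)

# The FTA box is not really isolated if it can reach billing over SFTP:
print(reachable("fta-server", "billing", 22))  # True
```

One permissive rule is enough to turn a "standalone" system into a pivot point into the core network.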

2. Usage of legacy platform.

This is where the two parties seem to differ. Singtel seems to think the product is supported (noting that EOL is around the corner), hence safe to use. Accellion, however, minces no words and bluntly tags the platform as legacy.

Logical ensuing question: why didn't Singtel migrate to a newer platform? (This is the part where I throw theories into the equation; only Singtel would know the real reason.)

First, don't fix what isn't broken. Remember, it's a 20-year-old platform, and assuming Singtel used it for even half its useful lifetime, that's easily 10 years! The folks who provisioned and configured the platform may have moved on, or even retired. It works, it continues to work, hence don't touch it!

A system migration can make or break a CIO/CTO's career. Look back at Singtel's statements: the FTA platform is used by internal and external parties. That means firewall rulesets need to be migrated. New service accounts need to be created. Permissions need to be mapped. Application IDs need to be created. Batch jobs or cron jobs running on the server need modifying. God knows what else! And that's just the internal part. Take the system away, and you'd have internal application owners screaming blood at you over missed KPIs.

The next big headache is coordinating initiatives with the external parties. I’ve had experience during migration where one of the external parties wanted to bill me for their migration! We, of course, declined politely and said that migrations are handled by individual organizations at their own cost (providing timelines to migrate across).

3. Why didn’t the patch work?

Singtel seems to indicate that the patch provided by Accellion didn't work. Noting Accellion's statement that the patch was produced within 72 hours, one has to wonder whether proper regression and quality checks were performed before it was released. It reminds me of Microsoft, who previously released a patch for a patch (to their credit, they've come a long way).


Tech debt is real, and in Singtel's case it just hit them with huge interest. While one can argue it's a zero-day issue, there's no doubt the legacy platform should have been managed out. It reminds me of the switch issue at MAHB. From a glance, Singtel has lots of work ahead of them. They are moving in the right direction; I only hope they take a comprehensive look at their environment and don't "scope down" to just the FTA.


  1. ZDNet: Singtel breach –
  2. Singtel Release:
  3. Accellion Press Release:
  4. MAHB Airport Case Study –
  5. Tech Debt –


Selayang Hospital IT system case study – Jan 2021

Digitization and Hospital Management

As part of the digital push introduced by former Prime Minister of Malaysia Tun Dr. Mahathir Mohamad, Selayang Hospital underwent a major transformation, introducing THIS, the Total Hospital Information System. The system aimed to provide a comprehensive hospital solution covering imaging and patient information. Research gauging acceptance and satisfaction among the hospital's nurses showed marked improvement in turnaround time and satisfaction with the digitization effort.

That was early 2000.

In May 2019

Fast forward to May 2019: the New Straits Times reported technical problems plaguing the once-famed THIS solution deployed at Selayang Hospital. Since May 4th, some 40% of elective surgeries had been rescheduled due to system failure.

The Ministry of Health, quoted by Bernama, stated that elective surgeries had to be postponed due to lack of access to pathology reports, critically for cancer patients. The technical issues also affected the hospital's ability to serve outpatients. The ministry added that inpatient services, emergency treatment and surgical emergencies had to be handled manually (not sure exactly what "manually" means here…).

THIS has had its challenges, but staff working at the hospital said the 2019 outage was the longest they had ever experienced (I read that as: the system has failed before, just not for this long).

A doctor was quoted as saying that computerization was done a long time ago (around the year 2000) and that the system has gone down numerous times, forcing staff to chart details manually. System downtime would last 6-7 hours. (Wow!)

Then Deputy Health Minister Dr. Lee Boon Chye confirmed that it is an old system that needs upgrading, and that the upgrade was underway as of 2019. Patients were given the option of continuing with their appointment, possibly with a longer wait, or rescheduling to a later date.

Come January 2021

On 9 January 2021, Malaysiakini carried a report that Selayang Hospital had suffered another outage. The hospital had been accessing records manually for the previous three days due to the failure of its IT system, burdening already overloaded hospital operations. This time a system called PowerChart was blamed (is PowerChart part of THIS, or a replacement?), down since January 6, 2021. Hospital officials were quoted as saying the system is used to record everything, from patient history to their progress, including lab investigations, CT scans, blood test results and reports.

Looking at what has happened, I wonder what's going on at Selayang Hospital. The hospital is already facing a pandemic, and IT system reliability is adding headaches for the overburdened staff.

Is this new?

There seems to be a spate of technology issues in Malaysian healthcare. About a year ago, I wrote about issues plaguing Sg Buloh hospital, which had IT problems due to the continued use of Windows XP. Sg Buloh hospital was reported to be using Windows XP as its server platform, instead of a Microsoft Server operating system, and 32-bit operating system limitations contributed to the problem. I covered this in extensive detail in this article.

Some glaring questions on the system

  1. Is PowerChart a replacement for THIS? Or is it part of it?
  2. Has MOH completed the system upgrade it announced in 2019?
  3. Do the constant failures of IT systems at a critical national infrastructure warrant a relook at how these systems are selected, deployed and measured for effectiveness over time?
  4. Based on the 2019 report, the system may not have received the necessary hardware or software upgrades. Are capacity management and system availability tracked, measured and acted upon?


For healthcare to serve the population effectively, healthcare systems need to function in prime condition. There can be no lapses, as these situations create more opportunities for healthcare failure and workforce exhaustion. Systems are meant to ease the burden and workload of hospital staff, and should rightfully function effectively.


  1. Malaysiakini (9 Jan 2021) – Selayang Hospital’s IT system down for 3 days and counting –
  2. New Straits Times (17 May 2019) – 22 year old Selayang Hospital getting much needed technical upgrade –
  3. Rosnah H., Zubir M.A., Akma Y.N. (2004) International Perspective Selayang Hospital: A Paperless and Filmless Environment in Malaysia. In: Ball M.J., Weaver C.A., Kiel J.M. (eds) Healthcare Information Management Systems. Health Informatics Series. Springer, New York, NY.
  4. Mohamad Yunus, N., Ab Latiff, D., Abdul Mulud, Z., & Ma’on, S. (2013) Acceptance of Total Hospital Information System (THIS), International Journal of Future Computer & Communications
  5. Sg Buloh Hospital: Jan 2020 Case Study –


Privacy – White elephant in the room with COVID-19?

If there's one thing life has taught me, it's that "almost" everything has a price. For a good sum, you can get a person to sell his phone. For others, something else. It's a known fact that we live in a world of data.

Everything we do today generates data. Every step you take, every move you make (no, it's not a song), every interaction. Our lives have become a digital data lake, filled with details of what happens. Data comes in many forms: audio, in the form of conversations; video, as CCTV footage; logs of transactions and usage; and the patterns formed from behavior.

In 2018, Strava, a company that produces a fitness tracking solution, inadvertently revealed secret military bases through its users' heatmap. The app's visualization component provided heat maps of user clustering, which indicated secret military presence. This wasn't an outcome Strava had foreseen, but it has undoubtedly become well known.

I have a saying about data: "Once you create data/information, you are forever doomed to tend to it until it ceases to exist." Something like Sisyphus, condemned to roll his stone up the mountain only to find it back at the bottom the very next day.

If you attend a conference and visit the booths just for a look-see, you'll most often find a simple glass bowl, and in it, a stack of name cards. Name cards are a wealth of personal information. One could argue it's corporate information, which is apt. However, you'd also find mobile phone numbers. Unless those are company-issued (remember the good ol' days of BlackBerry?), you've just handed your personal mobile number to not one, but countless individuals who will have access to it. Ever wonder how a completely unknown salesperson calls you up about similar products… *cricket sound*

Ironically, all that for a "possibility" of winning <insert the latest gadget name> or a booth token/premium. I remember a week-long debate, during a telco's implementation of Malaysia's PDPA and together with the then Commissioner of the PDPC, about whether managing personal information in the form of name cards counts as a business or a personal matter.

With the MCO, contact tracing became a "new normal" (see, I can do buzzwords too). Contact tracing here means the outlet you visit requires you to record details such as your name, phone number and temperature. It's implemented quite simply: a piece of paper or a book in which the visitor jots down his or her details. Hypothetically, if you see a person of interest walking up to the same outlet, all you have to do is glance over and note the number written in the contact log. There are two schools of thought on this. First, the contact details given are, in some instances, bogus precisely to prevent this situation, which sadly defeats the purpose. Second, it's a requirement, hence the burden of protecting that information belongs to the establishment collecting it… *cricket sound*

Point to note: what happens after the MCO? Will the log book end up in a dump somewhere, with all the contact details in it?

What about contact tracing apps? I'd like to cite the example of the AarogyaSetu app from India. When it was initially launched, its creators were barraged with queries about privacy and surveillance, which eventually led to the app being open sourced. Closer inspection of the published code revealed a few critical missing parts, and also that the app retained logs of other devices it had come into contact with (a database inside the app stores all of the Bluetooth addresses). The internet community celebrated its victory in compelling the authors to publish the code on GitHub.

There are a few reasons such applications should have their code published. The code allows the collective hive of the internet to find potential bugs or issues, letting the app improve and become safer over time. The transparency helps allay fears over surveillance and privacy. There is research on privacy-preserving schemes that can ensure the app captures only relevant information. And amid rising concerns of a police state, as seen with the Black Lives Matter movement (re: George Floyd) in the US and worldwide, such steps show the commitment of nation states to their rakyat (Malay for citizens). Data leaks have been seen to happen due to poor security on the backend (such as exposed data buckets on the internet).
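The core idea behind the privacy-preserving schemes mentioned above (DP3T being one example) can be sketched simply: devices broadcast short-lived pseudonyms derived from a secret key, so contact logs never contain stable identities. This is a heavily simplified illustration, not the actual DP3T protocol; the key and window scheme are invented.

```python
# Heavily simplified sketch of ephemeral-ID contact tracing (DP3T-inspired,
# not the real protocol). Key and window scheme are invented for illustration.
import hashlib
import hmac

daily_key = b"example-daily-secret"  # illustrative; real schemes rotate keys daily

def ephemeral_id(window: int) -> str:
    """Pseudonym broadcast during one time window; unlinkable to other
    windows without knowledge of the daily key."""
    mac = hmac.new(daily_key, str(window).encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

# Different windows yield different, unlinkable identifiers:
print(ephemeral_id(0) != ephemeral_id(1))  # True
```

Because observers only ever see rotating pseudonyms, a leaked contact log reveals far less than a book of names and phone numbers.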

There is no doubt the new normal has everyone adjusting to doing things differently. But that doesn't mean privacy needs to take a back seat. Things can be done properly; they just need to be seriously thought through. The age of smartphones has made it much easier for anyone and everyone to do contact tracing, but it takes serious forethought to make it effective.

In Malaysia, we have a number of mobile apps. The state government of Selangor published the "SeLangkah" app for simple contact tracing. The Malaysian federal government introduced MySejahtera and MyTrace for COVID-19 tracking. MySejahtera has been adopted as part of the wider strategy, while SeLangkah seems to be most retailers' choice.

With these apps in place, some questions are left hanging in Malaysia:

  1. What security considerations and controls are in place to ensure that the mobile applications are secure?
  2. Will the code be published (quoting the Minister of MOSTI, who made the statement on 10 May 2020)?
  3. Where is the data captured by these apps stored? Is that storage secure? Who has access to the data? What types of data are protected?
  4. How secure are the backend servers and services behind these mobile applications?
  5. Have the mobile apps undergone the necessary security validation (i.e. vulnerability assessment/penetration testing/code audits)?
  6. What happens after the Movement Control Order (MCO) is dismantled? What happens to the applications and the data captured? Who is responsible for ensuring that the data is not kept beyond its use and is disposed of securely?


1. Strava fitness band gives up military presence –

2. Myth of Sisyphus –

3. AarogyaSetu Android app GitHub page –

4. DP3T – Decentralised Privacy-Preserving Contact Tracing –

5. Minister allays privacy fears in contact tracing –


Hackers for Hire – The case of Dark Basin

Mad kudos to the Toronto-based Citizen Lab for this excellent work!

Citizen Lab just published (about 13 hours ago) an exposé of an Indian company, dubbed 'Dark Basin', responsible for hacking thousands of individuals across six continents. The victim list isn't just random Joes, but public figures, the rich and affluent, and NGOs including the Electronic Frontier Foundation (EFF).

I wasn't really surprised that an Indian company was named. India is known as a tech factory, producing software development and technology talent that has spread all over the world, and that includes the dark side of technology.

Not too long ago, I was involved in the forensic investigation of a high-level intrusion affecting the board of directors and senior management of a European telecommunications provider. Working closely with law enforcement agencies, we were able to trace the perpetrators to India and their vast infrastructure for such clandestine operations. The Norman Hangover report was published, detailing the bits and bytes of the attack.

Back to the recent exposé: the company used a variety of methods to target its victims. The primary mode of attack is phishing. Their success rates were high simply because they were extremely persistent: they would gather intelligence on targets and attempt multiple times from different angles until the targets fell prey. In the background, a server is set up to masquerade as a valid login page, such as Google's or Facebook's. Once a victim enters their password, the credentials are exposed to the attackers and used for whatever purpose is deemed fit. In some attacks, the attackers were seen using these illegally obtained credentials to send phishing emails to other related entities, making them fall prey as well.
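A minimal defence against the masquerading login pages described above is simply checking the page's hostname against the hostname you expect. A sketch (the phishing domain below is invented):

```python
# Minimal hostname check against lookalike login pages. The phishing
# domain below is invented for illustration.
from urllib.parse import urlparse

# Hostnames we actually expect to see login pages on.
LEGIT_HOSTS = {"accounts.google.com", "www.facebook.com"}

def looks_like_phish(url: str) -> bool:
    """True if the URL's hostname is not one of the expected login hosts."""
    return urlparse(url).hostname not in LEGIT_HOSTS

print(looks_like_phish("https://accounts.google.com/signin"))         # False
print(looks_like_phish("https://accounts.google.com.evil.example/"))  # True
```

The second URL illustrates the classic trick: the real brand appears as a subdomain of the attacker's domain, so only a full hostname comparison (not a substring match) catches it.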

This series of attacks is attributed with high confidence to Belltrox InfoTech Services. The attribution rests on a few factors. The domain previously used by Belltrox was registered with a Yahoo email address that was also used to register other phishing sites (the address was eventually changed). The hours during which phishing emails were sent correspond to IST (GMT+5:30). References to Indian festivals appeared in their URL shortener (powered by phurl), and, incidentally, the same shortener was used by the attackers to link back to their CV. The company's founder, Sumit Gupta (named as Sumit Vishnoi in DOJ documents), was previously indicted in a hacking-for-hire scheme. In short, they were identified through a severe lack of opsec (I think the staff didn't know they were supposed to keep it hush-hush).

LinkedIn provides a wealth of information about Belltrox and its circle. Based on the recommendations received by Belltrox and its staff, it's clear that Belltrox has been working with private investigators and government agencies. This includes Canadian government officials, local and state law enforcement agencies, and former intelligence agency staff who have most likely gone private.

The victimology indicates a large, diverse pool of targets, which shows the business is not specific but demand-driven. This includes NGOs that go after large corporations, such as those behind the #ExxonKnew campaign. Interesting targets include friends and family members of those involved in the campaign, including legal counsel.

At this point, it is certain that Belltrox is the source of the phishing campaigns. Who hired them remains unknown. Sumit Gupta, when contacted, denied any wrongdoing and stated that his firm assists its clients by retrieving emails for private investigators based on credentials provided. (Yup… eyes rolling here.)

Belltrox's targets span a wide range of sectors: short sellers, hedge funds, financial journalists, global financial services, legal services, government agencies, organizations across Eastern and Central Europe and Russia, and even individuals involved in private disputes.

Tools, Techniques and Procedures – aka Tradecraft

Belltrox's key modus operandi is phishing. They deploy a number of phishing kits (which they even leave open and available). To power these kits, a URL shortener is used, based on a package called phurl, which generates sequentially numbered short URLs. That makes it easy for the good guys™ to enumerate them and identify the actual long URLs. Through this, the list of domains used for phishing was identified.
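The sequential-shortener weakness is easy to see in code. Below, the resolver is a local stand-in (no real shortener is queried), and the base-36 encoding is only phurl-like, not phurl's exact scheme; the point is that sequential IDs make the whole URL space walkable.

```python
# Why sequential short URLs are enumerable. The resolver is a local
# stand-in, and the base-36 encoding is phurl-like, not phurl's exact scheme.
import string

ALPHABET = string.ascii_lowercase + string.digits  # base-36 symbol set

def code_for(n: int) -> str:
    """Sequential ID -> short code (base-36 style)."""
    if n == 0:
        return ALPHABET[0]
    out = ""
    while n:
        n, r = divmod(n, len(ALPHABET))
        out = ALPHABET[r] + out
    return out

# Stand-in resolver: short code -> long (phishing) URL.
resolver = {code_for(41): "https://phish.example/google-login"}

# An investigator just walks the sequence and collects whatever resolves:
found = [resolver[code_for(i)] for i in range(100) if code_for(i) in resolver]
print(found)  # ['https://phish.example/google-login']
```

A randomly generated code space would have forced investigators to guess among billions of values; sequential IDs reduced it to counting.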

While phishing isn't new, this revelation reinforces the idea that it remains very much relevant and effective. Look-alike login pages of commonly used services such as Google and Facebook are hosted, creating the opportunity for the attacker to capture credentials.

Hacking-as-a-Service (HaaS) – Global issue

HaaS is becoming a global thorn in the cyber realm. The emergence of such players, including the reports on DarkMatter, highlights a lucrative market for such services: while the service remains clandestine, demand for it continues to thrive. Legal frameworks are still developing around how to handle and dismantle such services.

HaaS also complicates attribution. In this case, Belltrox was identified as the attacker, but the actual puppet master remains hidden. The same applies to nation-state-sponsored attacks: a state can wash its hands of the matter entirely while engaging a contractor to do the dirty work.

Protecting Yourself

These attacks highlight the need for two-factor authentication (2FA). It is worth noting that any security control put in place makes it harder, but not impossible, for attackers to get through. The Dark Basin attacks run on the premise that victims did not secure their Google accounts with 2FA, making it easy for the attackers to use their ill-gotten credentials to gain access.
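To give a feel for what stands between a stolen password and account takeover, here is a minimal RFC 6238 time-based one-time password (TOTP) generator using only the Python standard library. It is a sketch of the mechanism behind most authenticator apps, not a replacement for a vetted implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a shared secret the attacker never sees, phished credentials alone are not enough to log in.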


  1. Norman Hangover Report –
  2. Citizen Labs – Dark Basin –
  3. EFF phishing attempts –


Sg Buloh hospital – Jan 2020 case study


The Malay Mail reported that Sungai Buloh Hospital (SBH) was recently hit by IT failures. Sg Buloh hospital is well known to the denizens of the Klang Valley, being the government hospital of choice for many. I personally find the service very good: the doctors are friendly and professional, and I don't spend much time waiting, as the processes are quite efficient.

The news report highlights difficulties in retrieving patients' medical and investigation reports. The problem was pinned on the hospital's use of Windows XP as its operating system. Reports also mention that the hospital's main servers were down for some time, and that the issue partly stemmed from limited storage space due to a Windows XP OS limitation.

Some of the reported facts sound odd to me, though plausible. The lack of clarity on the matter unfortunately fuels speculation (some of which is discussed in this article).

What about Windows XP?

Firstly, the use of Windows XP. Microsoft positioned Windows XP as an end-user operating system, targeted at desktops and laptops. It was never meant to be a server platform, though I have seen small organizations turn a desktop into a file share. That may pass in a minuscule deployment, but surely not for a hospital the size of SBH.

Microsoft declared Windows XP end-of-life (EOL) on April 8, 2014. From that date there is no support: if a bug or vulnerability is found in the OS, it will not be fixed. This also means the surrounding software ecosystem, such as anti-virus, endpoint protection and other critical software, will no longer be maintained for it, as vendors focus their efforts on supported operating systems. So not only do you have an OS without updates, you also lose updates for the software that runs on it. Essentially a ticking time bomb.

One of the issues SBH reportedly faced was server failure. This could be attributed to the use of Windows XP as the server OS (god forbid, but based on experience it can happen), or to a general failure at the OS, application or hardware level (it may even be the network; the server might be running on a 10Mbps network card supporting the whole hospital). Again, without further details, one can only speculate.

Another interesting point: the report claims Windows XP has a storage limitation. A quick check shows that the first limit is at the memory level, due to the 32-bit architecture, which caps addressable memory at 4GB (Microsoft did also release a 64-bit edition of XP). There are limits at the file system level too: FAT32, which XP supports, caps individual files at just under 4GB, and the format tool natively supplied with XP refuses to create FAT32 partitions larger than 32GB, although XP can read larger FAT32 volumes and also supports NTFS, which has no such practical limit. (Primer: FAT/FAT32 keeps an index of where files are located on disk, based on the free space available, which gives the OS the location of each file.)
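The numbers behind those limits are easy to sanity-check with back-of-the-envelope arithmetic. The figures below reflect my understanding of the 32-bit address space, the FAT32 on-disk format and XP's format tool; treat them as illustrative:

```python
GIB = 2 ** 30  # one gibibyte

# A 32-bit address space can address 2^32 bytes, i.e. 4 GiB of memory.
addressable = 2 ** 32
assert addressable == 4 * GIB

# FAT32 stores file sizes in a 32-bit field, so a single file tops out
# at 2^32 - 1 bytes, just under 4 GiB.
max_fat32_file = 2 ** 32 - 1
assert max_fat32_file == 4 * GIB - 1

# XP's native format tool refuses to create FAT32 volumes above 32 GiB,
# although the file system itself supports far larger volumes.
xp_fat32_format_cap = 32 * GIB
```

None of these come close to explaining a hospital-wide storage crunch on modern disks, which is why the "XP storage limitation" explanation deserves scrutiny.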

Software Obsolescence

Software obsolescence isn't new, but it's worth revisiting to understand how it can contribute to this situation.

When a software company declares a product End-of-Life, it is informing customers that it will no longer support that product and that they should move to newer software. At face value this looks simple, but the impact is far-reaching.

Hardware Compatibility

Firstly, upgrading an OS requires the hardware to be compatible. If my father has a PC at home, he first needs to check whether it can be upgraded. At times the change in the OS is drastic, and the PC may not be compatible for several reasons, for example when the processor architecture itself is obsolete. A new OS may support only 64-bit architectures, with no backwards compatibility for 32-bit. Windows XP supported both 32-bit and 64-bit architectures, paving the way for the move to 64-bit hardware, which offers better scalability and flexibility. While this shows the link between OS and hardware, subtle software changes can have a similar effect: Apple's macOS Catalina dropped support for 32-bit applications entirely, making it a pure 64-bit operating system. Many users ran into application availability issues, and some ended up reverting to macOS Mojave.

Secondly, there is hardware compatibility with the OS from the point of view of drivers. Drivers are the software that allows the OS to "talk" to the hardware; without them, the hardware is useless. I remember a spectrum analyzer in one of my previous roles that only worked on Windows 3.11 for Workgroups due to limited driver support. We couldn't move it to a newer OS as the manufacturer had stopped support; their recommendation was to spend an equal amount of money on new equipment with up-to-date software and drivers. The organization, of course, chose the path of least expenditure and isolated the PC, making it a standalone, single-purpose analyzer. I still remember pulling data off it on floppy disks. The same can be said of medical equipment. Imagine an MRI machine built with Windows XP as its operating system that cannot be migrated because hardware support is no longer available. No "sane" hospital would buy a new MRI machine just because the OS is outdated. As some IT experts will tell you: "If it ain't broken, don't fix it…" The whole security industry was built around managing such risks, mind you.

Thirdly, organizations do not actively manage obsolescence, because obsolescence is an expensive affair. A cost-conscious organization will do its best to "sweat its assets" and make its investments live past zero net book value. Obsolescence management creates additional cost for the business, as we potentially saw in the MAHB issue. There is the cost of replacing equipment, the cost of migrating data, and the time and resources required to carry out the project, not to mention training for everyone involved so they know how to use the new system (which may come with a new UI/UX and workflow). It is a daunting affair, with far-reaching effects throughout the organization.

Software Dependencies

We depend on a number of software packages for our systems to work. A computer and an OS alone don't do much; a business runs on its Line-of-Business applications, for example ERP (Enterprise Resource Planning), CRM (Customer Relationship Management) and many more. When an OS is deprecated, software developers also stop developing for the now-defunct OS and move their codebase to a new platform. Depending on the extent of the change, the old code may be rendered useless. Some platforms provide a degree of cross-version support, but it is usually limited and skewed in favor of newer platforms. Remember that the same support the user needs is also needed by software developers to build their code on. There are cross-platform frameworks available, but even these may stop supporting older, deprecated OSes.

The OS also comes with an SDK (Software Development Kit), crucial for development teams to harness the power of the OS. Just as the OS gets deprecated, so do the platform SDKs.

Moving Forward

I have discussed technology debt in greater detail before, and it seems to be a recurring theme in large organizations in Malaysia. Obsolescence is a huge debt, one most organizations overlook, and it eventually comes back to haunt them. It is not a discussion any CEO/CFO likes to have, especially when the cost balloons and creates a huge dent in the balance sheet. The same can be seen in government departments, where stretching taxpayers' money is regarded as the prudent mark of a well-oiled administration.

Non-technology organizations often fail to grasp the complexities of technology. It took us long enough to start trusting automation and computing, and now this becomes another headache to manage. Some organizations even opt not to capitalize on the computing/internet era, effectively creating a barrier to efficiency and economies of scale (when it comes to data/records management). Giving IT focus and attention helps alleviate such issues.

Most organizations establish an IT Steering Committee, comprising the senior leadership team, to address such risks. Strategic discussions on project prioritization, maximizing annual budgets and reviewing technology risks become a staple periodic agenda for looking at IT and its associated risks/debts.

If the business is backed against the wall, with no option but to run on the existing infrastructure, some mitigations can still be applied. Ideally such systems should be run in isolation (similar to the spectrum analyzer case above). If they must be networked due to the nature of the application, thorough backup and restoration procedures need to be established, run and periodically tested. I have seen organizations that pride themselves on doing backups but have never tested them, nor even tried a restore (there's a reason those products are called backup software and not restore software). Having a recovery plan helps, but a plan is only as good as its periodic testing. Worst-case scenarios need to be rehearsed: break-glass procedures, and how the business can run in the event of complete failure. If all else fails, start saving for a new system, and maybe look at the cloud as an alternative.
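Testing a restore, as opposed to merely running backups, can be as simple as restoring to a scratch location and comparing content hashes against the live data. A minimal sketch (the function names and directory layout are my own illustration):

```python
import hashlib
from pathlib import Path

def tree_digest(root):
    """Map each file's relative path to its SHA-256 digest, so a
    restored backup can be compared against the live data set."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            rel = path.relative_to(root).as_posix()
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def restore_is_faithful(live_dir, restored_dir):
    """A restore test passes only if every file matches byte-for-byte."""
    return tree_digest(live_dir) == tree_digest(restored_dir)
```

Running a check like this on a schedule turns "we have backups" into "we have verified we can restore", which is the claim that actually matters during an outage.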


  1. Malay Mail (22 Jan 2020)-
  2. Microsoft XP EOL statement –
  3. MAHB Case Study –
  4. Technology Debt –

IT vs Cyber Security – Technology Debt

Where are we today?

Almost daily, we are bombarded with news of cyber attacks, breaches, data leaks and more. Cyber incidents have become so much the norm that someone was quoted as saying, "There are two types of organizations: those that have been breached, and those that have yet to be." As such, organizations are putting emphasis on spending for continuity, and one question gets asked frequently: how much is enough? Is there a magical percentage a CEO should consider healthy spending to ensure the safeguards are sufficient for today's and tomorrow's risks?

While there is research on average spend by organizations, that is not an accurate reflection of any particular organization's spending pattern for protecting its assets. This article aims to demystify technology debt, using security as a lens, in order to identify right-spending for an organization. Technology debt is just one of the considerations when evaluating tech spend versus security spend.

What is technology debt?

A debt is something owed. When someone borrows money, they are obliged to return it (in most instances with interest). Technology debt is no different from conventional debt, except that what is deferred are the considerations, protections and governance that should accompany rolling out current and new technology.

The interest on technology debt is the occurrence of an event that creates an additional burden on the organization. For example, a cyber breach causes additional overheads: manpower utilization, engagement of third parties for services such as recovery and forensics, and other unplanned expenditure.

Does Technology incur a debt? How does it work?

To illustrate, here are three examples of how technology debt is incurred.

Scenario 1

An end user procures a computer for home use, gets the operating system installed and starts using it. Finding the computer useful and engaging, he/she uses it not just for work and assignments but also for personal content consumption: videos, websites, even social media. One day, the user encounters a phishing email and downloads an attachment that infects the machine with ransomware. As the work is important and needs to be sent to a customer, the user ends up paying the ransom.

Here, the tech debt was incurred the moment the user started using the computer. The debt was the obligation to secure the machine with the necessary protections, such as endpoint protection and phishing alerts. Because the debt had been incurred, the user ends up paying interest, i.e. the ransom, in order to retrieve the data.

Question: does the debt end here? Yes and no. While the ransom (interest) is paid, the debt (principal) is still there. The debt only goes away when the user secures the endpoint/laptop/machine and removes the "debt" altogether.

Scenario 2

A hardware store purchases a Point-of-Sale (POS) terminal, primarily to compute sales tax and produce the reports required by the authorities. A thermal printer prints receipts with the computed tax value as per regulations, and a barcode scanner makes it easy to capture item codes at checkout. It all becomes very convenient, so much so that even the inventory is managed effectively. Life seems easier, thanks to the new technology. The POS came with a 1TB hard drive, seemingly impossible to fill up.

One day, for some unfortunate reason, the hard drive in the POS machine crashes. This causes real inconvenience, as prices now have to be computed manually. Thanks to the convenience of the POS system, prices are no longer printed on items; everything relies on the product barcode. A manual price list has to be compiled after calling vendors for confirmation. What makes it worse? The taxation department decides to show up for an audit, demanding the taxation report that was supposed to be producible ad hoc as part of the system's regulatory requirements.

The technology debt in this case is the inability to back up and restore the system. Reliance on the system is fine, but the debt (backup/recovery) had been incurred, and the user ended up paying interest: fines for non-compliance, additional recovery services, instituting manual processes, and wasted time.

Scenario 3

A mobile app development firm purchases a server to store its source code. The server is backed up daily to DVD, with a copy kept at a separate site, and is configured with a detailed access control list to ensure only the right people have access to the right code.

A disgruntled employee decides to take matters into his/her own hands and deletes a portion of the code on the day he/she leaves. The manager discovers the issue while reviewing the CI/CD logs after a build failure and finds files missing. Inspecting the version control history identifies the malicious action that took place. The manager then recovers the lost part of the tree and compares it against the backup to ensure the restored contents are consistent.
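The manager's check in this scenario boils down to a set difference between the backup's file index and the current tree, which flags exactly what was deleted and what was altered. A sketch, assuming both indexes map relative paths to content hashes (the function name is hypothetical):

```python
def audit_against_backup(backup_index, current_index):
    """Compare a backup's path->hash index with the current tree's.
    Returns (deleted, modified): files missing now, and files whose
    contents changed since the backup was taken."""
    deleted = sorted(set(backup_index) - set(current_index))
    modified = sorted(
        p for p in set(backup_index) & set(current_index)
        if backup_index[p] != current_index[p]
    )
    return deleted, modified
```

Paired with a daily backup, this kind of diff is what lets the team say with confidence which files to restore and which changes were legitimate.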

This case shows a zero-debt scenario. While deploying the solution, the IT team took into consideration the requirements for backup, audit logs and a continuity plan. When a potential "interest" event came up, because the debt was zero, there was no (or minimal) impact to the organization.

How does tech debt influence budgeting?

As technology gets deployed, as illustrated above, debt starts accruing. Some organizations address the debt up front, as technology is deployed, to avoid interest. Others spread the debt out over time, hoping the interest never comes due.

How does this influence budgeting? The budget to manage security must include addressing the debt in a timely manner. Organizations that have incurred debt need to spend in order to zero it out. As the budget is usually a single line item for an organization, this shows up in the percentage split between IT spend and security spend.

Hence, for an organization with heavy tech debt, the budget will lean more towards resolving the debt than towards expanding IT. The percentage split will be skewed, as the debt now influences the spend.

Another reason the spend gets skewed is when the interest comes into play. An incident makes the interest mature and payable, creating additional expenditure that eats into the budget. Post-incident, organizations usually put far more emphasis on governance and control, almost writing a blank cheque to show commitment, in many instances including hiring a CISO who reports directly to the CEO and Board.

The result: the spend percentage relative to the overall budget differs according to the level of debt resolution and the state of the organization. Mature organizations resolve debt as technology is incorporated, while others play catch-up due to business and budget limitations. What matters is being mindful that the debt may spring interest at any time, forcing the organization to spend more. Delayed investment may result in heightened expenditure.

While the scenarios presented above are simplistic, it is worth remembering that technology debt is often multi-dimensional and requires in-depth study to ascertain the respective areas of protection required. In a future article, we can discuss this multi-dimensional aspect of tech debt and how to resolve the debt while preventing interest.

Moving forward

The crux of this article was to draw a clear distinction between why different organizations have different budget splits. Though a spending baseline helps CEOs gauge whether their spend is healthy, understanding technology debt helps justify why some organizations need to spend more. While most organizations look at analyst reports on average security spend, it is wise to keep technology debt in check so that interest never comes due.

Perhaps, if there is enough interest, I will write up how to identify and resolve technology debt.

Information Mismanagement – the need for proper Information Security

In this day and age, it is difficult NOT to automate and computerize your business and its data. Your receipts are part of an elaborate data capture/retention/warehouse infrastructure that constantly crunches numbers, creating meaningful information across a vast cloud of networks, systems and storage. As such, one cannot run away from the responsibility of protecting that data, which is key to any business in this modern age.

It is nearly impossible to operate a business in total isolation. One might say, "I am a petty trader and do not need much information management." Well, you might get into trouble if your books are not in order, your stock is mismanaged, your payments unmet and your cash mishandled. You can run your business aground, or even end up being chased by the tax collector.

Most SME organizations I have seen tend to have a very small IT outfit and treat everything as an IT responsibility. The reality is, the web designer you hired may be able to fix some common IT issues, but will not be able to tell you the real risks of information mismanagement. Your organization gets hit by a worm or virus infection, and you invest in an anti-virus solution. Your website gets hacked; you just reinstall the OS. After a while, you realize that your competitors seem to know your every move, and you feel helpless trying to move your business forward. It is convenient to blame the IT Guy a.k.a. Programmer a.k.a. Security Guy.

Then comes the crude information security program. You hire someone who has heard of information security, put him way down the food chain (the reporting hierarchy) and expect everything to be secure. The person comes with the standard-kit approach: have firewalls, install anti-virus. Spend a little, and get more, maybe? Sure, that sounds reasonable. But guess what? You still get attacked, you blame your security vendor and eventually fire your security guy. Again, that doesn't sound workable, right?

You grow further, now having a team, but still buried at the bottom of the food chain. You have people advising at the project level on implementations and doing periodic reviews/audits. Sounds good, right? But here's the problem: projects have the word COST tied to them, and security is a line item that's "nice to have". So when push comes to shove, the security line item gets pushed aside because the project must go on, whatever the cost. Even before the team can say anything, their own boss muffles their voice. Risk doesn't get documented and is easily swept under the carpet. (Sounds familiar?)

You reach a stumbling block where things keep failing. You start wondering: is it the people? The process? What gives?

Herein lies the problem with implementing information security in an organization. How successful the information security program will be depends on the goals of the organization and its level of governance.

For the CEO/Board of Directors, the governance of information security needs to come as a corporate governance mandate. The CEO/Board needs to agree that information security is an agenda item for review (either as a line item by itself, as part of the Audit Committee review, or within the Enterprise Risk Management review). Establishing a clear escalation process to the Board provides visibility and accountability of the company's status and gives the Directors a clearer view of the organization. It also assures the Board that the organization complies with the information security/privacy laws that may govern the business. The CEO is accountable at the company level for ensuring that the information security program is running, conducting reviews and ensuring that escalation reports are discussed and closed in a timely manner. Key message here: visibility and reporting.

The CEO has many other functions, so this particular function flows down to the CSO/CISO. The CSO (Chief Security Officer) encompasses the two large security domains, namely physical and information security, whereas the CISO (Chief Information Security Officer) is responsible for information security controls and governance. When establishing the hierarchy, position and reporting visibility also need to be thought through; the reporting line (both official and unofficial) ensures the subject matter gets the right attention. In a highly governed environment, the CISO/CSO reports directly at the COO/CEO level and has a reporting requirement to the Board of Directors; otherwise the CISO function is absorbed within the Audit/Assurance structure. In a slightly less governed environment, the CISO/CSO reports to a Head below the COO/CEO level (usually under the CIO/CTO reporting line). In other organizations, the CISO role is just a manager role within the larger IT/Technology enclave. Key message here: reporting structure and empowerment.

The success of information management in any organization depends on how well information is governed, and this is where process and policy come into play. Having a well-defined policy (using a standards-based baseline like ISO 27002:2005) helps ensure you have all your bases covered. But policies alone do not help. Policies need to be translated into standards and guidelines, then woven into the fabric of everyday process. Enhancing these processes should improve them, while carefully ensuring business is not disrupted by unnecessary red tape or thrown into limbo. Take the time to get policies reviewed at all levels of the organization; that helps you get buy-in from everyone. Policies are living documents, so be prepared to schedule review cycles and have the documents approved at the right level (usually the CEO). The review cycle should be kept at one year. Have the ability to enforce new policy requirements immediately (for urgent business needs) without a full review, as this enables immediate steps to prevent further issues/damage, but be prudent with this ability. Key message here: properly defined policy that can be adopted into everyday processes.

The structure of the infosec team makes a difference in how the organization's needs are managed. Understand the roles other departments play, such as Audit, as they will be performing some of the functions. Having two divisions performing the same function is ridiculous; you might as well empower the right division to manage the right responsibilities. Clearly state the boundaries of each team (use RACI charts) and identify their abilities and functions. Even within the infosec team you can add structure: the operational aspects of information security can remain with the operations team, handling day-to-day tasks, whereas the more strategic/tactical roles can reside in a different hierarchy. Key message here: check and balance, even within information security.

Lastly, the organization itself needs to move as a unit. In some organizations, information security is perceived as a stumbling block: you'll probably hear more NOs than YESes, and more grouses than actual solutions. In those cases, organizational objectives are clearly overshadowed by individual preference. Becoming a solution provider goes a long way in building rapport and getting things done; if you get put in cold storage, you will not move anywhere, nor will you get the right level of participation to see your goals through. Information security goals must tie back to the overall organization's goals. In cases where going by the book doesn't work, a rational mind becomes important: establish an exemption process as a catch-all/release-all mechanism, while ensuring it is not easily abused. Hence, reporting structure and responsibility need to be clearly established. Key message: TEAMWORK.

Links: Twitter runs foul of FTC