Google’s open source security move could prove meaningless for the enterprise. In a perfect world, it would be.

One of the biggest threats to enterprise cybersecurity is compromised third-party and open source code, so you might think Google’s Assured Open Source Software service will be a big help.

Think again.

Here is Google’s pitch: “Assured OSS enables enterprise and public sector users of open source software to easily incorporate the same OSS packages that Google uses into their own developer workflows. Packages curated by the Assured OSS service are regularly scanned, analyzed, and fuzz-tested for vulnerabilities; have corresponding enriched metadata incorporating container/artifact analysis data; are built with Cloud Build, including evidence of verifiable SLSA compliance; are verifiably signed by Google; and are distributed from an Artifact Registry secured and protected by Google.”
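To make that pitch a little more concrete: SLSA-compliant provenance and signing mean each package ships with machine-readable evidence of where and how it was built, which a consumer can check before using it. The sketch below is a minimal, hypothetical illustration only – the file names and expected builder ID are assumptions, and real verification should rely on Google’s documented, signature-checking tooling rather than field inspection alone. It reads an in-toto/SLSA provenance document and confirms that the artifact’s digest and builder match what is claimed.

```python
import hashlib
import json
import sys

# Hypothetical inputs: a downloaded artifact and its SLSA provenance
# (an in-toto attestation statement saved as JSON). In practice the
# provenance arrives in a signed envelope whose signature must be
# verified first; that step is omitted here for brevity.
ARTIFACT_PATH = "package-1.2.3.tar.gz"            # assumed file name
PROVENANCE_PATH = "package-1.2.3.provenance.json" # assumed file name
EXPECTED_BUILDER = "https://example.com/trusted-builder"  # assumed builder ID

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a local file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> int:
    with open(PROVENANCE_PATH, encoding="utf-8") as f:
        statement = json.load(f)

    # The provenance should be a SLSA predicate inside an in-toto statement.
    if "slsa.dev/provenance" not in statement.get("predicateType", ""):
        print("Not a SLSA provenance statement")
        return 1

    # The artifact we downloaded must appear among the statement's subjects.
    local_digest = sha256_of(ARTIFACT_PATH)
    subjects = {s.get("digest", {}).get("sha256") for s in statement.get("subject", [])}
    if local_digest not in subjects:
        print("Digest mismatch: artifact does not match its provenance")
        return 1

    # The build should come from the builder we expect.
    builder_id = statement.get("predicate", {}).get("builder", {}).get("id", "")
    if builder_id != EXPECTED_BUILDER:
        print(f"Unexpected builder: {builder_id!r}")
        return 1

    print("Provenance checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```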

Whether this service proves effective depends on the end user. For some companies – especially small and medium-sized businesses without a dedicated security team – it could be valuable. But for larger enterprises, things are very different.

Like all things cybersecurity, we must start with trust. Should IT trust Google’s efforts here? First, Google has already approved a number of malware-laden or otherwise problematic apps for its app store, Google Play. (To be fair, it’s just as bad in Apple’s App Store.)

But that speaks to the point: it is extremely difficult to find every security issue in code. No one is going to do it perfectly, and Google (and Apple) don’t have a business model that supports properly staffing those areas. So they rely on automation, which is spotty at best.

Don’t get me wrong. It’s a decent thing for Google to do, and the effort is welcome. But the key enterprise IT question is whether this program allows IT to do anything differently. I argue that it will not.

Every piece of code IT uses – especially open source – needs to be scanned for problems. That may include intentional problems, such as malware, ransomware, backdoors, or other malicious elements. But it also includes accidental holes; it is hard to fully defend against typos or sloppy coding.
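To give a concrete sense of what that scanning can look like for the accidental-holes category, here is a minimal sketch (the package name and version are hypothetical, and a real pipeline would read them from a lockfile or SBOM) that asks the public OSV.dev vulnerability database whether known issues have been reported against a given open source dependency. Note that this covers only publicly known vulnerabilities, not malware or backdoors, which is exactly why it cannot replace deeper review.

```python
import json
import urllib.request

# Hypothetical dependency to check; in a real pipeline this would come
# from a lockfile or SBOM rather than being hard-coded.
PACKAGE = {"name": "jinja2", "ecosystem": "PyPI"}
VERSION = "2.11.2"

def known_vulnerabilities(package: dict, version: str) -> list:
    """Query OSV.dev for known vulnerabilities affecting one package version."""
    query = json.dumps({"package": package, "version": version}).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        result = json.load(resp)
    return result.get("vulns", [])

if __name__ == "__main__":
    vulns = known_vulnerabilities(PACKAGE, VERSION)
    if not vulns:
        print(f"No known vulnerabilities for {PACKAGE['name']} {VERSION}")
    for v in vulns:
        # Each entry carries an ID (e.g., a CVE or GHSA alias) and a summary.
        print(v.get("id"), "-", v.get("summary", "no summary"))
```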

None of this means coders and programmers can justify skipping a double-check of the code that comes from this Google program. And no, the knowledge that Google uses it internally should not make any CIO, IT director, or CISO feel warm and fuzzy.

This brings up a big problem: every enterprise should check and double-check every line of code it gets from somewhere else – no exceptions. That is the ideal; reality is another matter.

I discussed the Google move with Chris Wysopal, one of the founders of software security company Veracode, and he made some compelling points. There are several disconnects at play here: one between developers/coders and IT management, another between IT management (the CIO) and security management (the CISO).

On the first disconnect, IT can issue as many policy pronouncements as it wants; if developers in the field choose to ignore those orders, it comes down to enforcement. And with every line-of-business executive breathing down IT’s neck, demanding everything immediately – and those people generate revenue, which means they will probably win any battle with the CFO or CEO – enforcement is hard.

That assumes IT has actually issued instructions that external code be double-checked to see whether it is naughty or nice. This is the second disconnect: the CISO, CSO, and CRO all want code-checking to happen consistently, while IT directors and CIOs may take a less aggressive stance.

There is a risk from Google’s move that could be described as a false sense of security. Some people in IT will be tempted to use Google’s offering as an excuse to give in to LOB pressure and abandon cybersecurity checks on anything covered by Google’s Assured program. To be blunt, that means deciding to trust – completely and blindly – that the Google team will catch everything.

I can’t imagine a Fortune 1000 (or privately held counterpart) IT executive truly believing and acting that way. But when business leaders are pressuring them to move faster, it is an awfully tempting excuse to do what they shouldn’t.

This forces us to deal with some uncomfortable realities. Is Google Assured code more secure than unchecked code? Absolutely. Will it be perfect? Of course not. Therefore, prudence dictates that IT keep doing what it did before and check all code anyway – which makes Google’s effort mostly irrelevant for the enterprise.

But it is never that easy. Wysopal argues that many enterprises simply do not examine code the way they should. If that is true – and I sadly concede it is – then Google Assured is an improvement over what we had last month.

In other words, if you’re already cutting corners and plan to keep doing so, Google’s move could be a good thing. If you’re strict about code-checking, it’s irrelevant.

Wysopal also argues that the scale of Google’s effort is too small to help much, regardless of an enterprise’s code-checking approach. “This project needs to scale 10-fold to make a big difference,” Wysopal said.

What will those IT leaders who don’t rigorously check code do? “They wait for someone else to find the vulnerability (and then fix it). The enterprise is a dumb consumer of open source. If a vulnerability is found by someone else, they want a system where they can just update,” Wysopal said. “As soon as security things start to slow down the app work, they get bypassed.”

Google’s move is good news for those who have been cutting security corners. How many enterprises is that? It’s debatable, but I fear Wysopal may be more accurate than anyone would like to admit.

Copyright © 2022 IDG Communications, Inc.
