Introducing the Solidity Function Profiler

Static analyzers are good at detecting certain types of security vulnerabilities. However, one place that static analysis often falls short is in the detection of authorization bugs. This is because authorization tends to be a “business logic” problem. How would an analyzer know what functionality should be off-limits to normal users? One can infer based on semantics (looking for words like “admin”), but such clear-cut cases are rare.

A couple of days ago I wrote about Parity’s multi-sig contract vulnerability. Because there was nothing inherently wrong with the vulnerable functions, aside from the lack of authorization checks, it is unlikely that a static analyzer would have flagged these issues.

If I had to guess at why this vulnerability was missed, the culprit would probably be a lack of effective manual code review. Manual code review is a tedious, time-consuming task, but it is often the only way to find certain types of bugs. In this particular case, a human looking at a list of the contract’s functions would hopefully have noticed several suspicious-looking public functions.

As far as I know, there are no public tools for profiling a Solidity contract’s functions. That is why today I would like to release a tool called the Solidity Function Profiler.

The tool uses ConsenSys’ solidity parser library to generate an AST of the contract being analyzed. It then “walks” the AST, finds function declarations, and takes note of each function’s signature, visibility, return values, and modifiers. Finally, it produces a human-consumable report. Being able to quickly gather this kind of information about a contract is very useful in understanding how it can be interacted with. My hope is that it will help prevent future vulnerabilities like the one exploited in the multi-sig contract attack.
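
As an illustration of the kind of thing the report surfaces, consider a contract along the lines of the following sketch (the contract and function names are made up for this example). A per-function listing of visibility and modifiers makes it obvious at a glance that setOwner is callable by anyone, while setFee is guarded:

    pragma solidity ^0.4.11;

    contract Vault {
        address public owner;
        uint256 public fee;

        modifier onlyOwner() {
            require(msg.sender == owner);
            _;
        }

        // No visibility specifier and no modifier: defaults to public in
        // 0.4.x, so anyone can call this. A function listing makes that
        // hard to overlook.
        function setOwner(address newOwner) {
            owner = newOwner;
        }

        // Explicitly public, but protected by the onlyOwner modifier.
        function setFee(uint256 newFee) public onlyOwner {
            fee = newFee;
        }
    }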

You can find the tool here.

Parity Multi-Sig Contract Vulnerability

So this just happened. It’s late, but before heading to bed I wanted to quickly write up a technical analysis of this one because it’s quite short.

One of the quickest ways to understand a vulnerability is to look at its patch, if one is available. Let’s do just that. There are actually a couple of now-closed pull requests related to the fix, but the very first one tells the story behind this vulnerability. The diff can be found in Parity’s GitHub repository here.

For readers who aren’t familiar with the Solidity language, the keywords added at the end of the first two functions in the diff are visibility specifiers. Visibility specifiers dictate who is allowed to call specific functions, just as they do in other programming languages. Sometimes functions simply aren’t fit for public use, whether for security reasons or by API design.

What does the internal visibility specifier that was added in the pull request do? Consulting the Solidity docs, we find:

internal:
Those functions and state variables can only be accessed internally (i.e. from within the current contract or contracts deriving from it), without using this.

The crux of this vulnerability is that several privileged contract functions were left public.

Update: Another detail is that calls to the main contract were delegated to the vulnerable contract, which acted as a library, making this issue a little harder to see. I would argue not by much though, especially considering that the main contract simply takes incoming calls and delegatecalls to the vulnerable library contract. The design is hard to miss.
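
For context, the forwarding design looks roughly like the simplified sketch below (this is not Parity’s actual source, just an illustration of the pattern): the deployed wallet holds the funds and configuration, and its fallback function forwards any call it doesn’t recognize to a shared library contract via delegatecall, so the library’s code runs against the wallet’s storage.

    pragma solidity ^0.4.11;

    contract Wallet {
        // Address of the shared library contract that holds the wallet logic.
        address _walletLibrary;

        function Wallet(address lib) {
            _walletLibrary = lib;
        }

        // Any call that doesn't match a function on this contract is
        // forwarded to the library, executing in this contract's storage
        // context.
        function() payable {
            _walletLibrary.delegatecall(msg.data);
        }
    }

Because the library’s functions are reachable through this fallback, anything left public in the library is effectively callable on every wallet that forwards to it.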

The result was that anybody who knew the address of a vulnerable contract could call these functions and change the contract’s configuration, including its list of owner addresses.

You may be wondering how these privileged functions were made public in the first place. The answer actually lies in a lack of code. You see, unless otherwise specified, the visibility of a Solidity function is public.
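
To illustrate (a made-up snippet, not Parity’s code, though initWallet was one of the functions actually involved): under the 0.4.x compilers current at the time, the first function below compiles and is callable by anyone, while the second can only be reached from within the contract or contracts deriving from it. Later compilers (0.5 and up) make the visibility specifier mandatory, which would have turned the omission into a compile error.

    pragma solidity ^0.4.11;

    contract WalletLibrary {
        address public owner;

        // No visibility specifier: defaults to public, so any external
        // caller can take ownership.
        function initWallet(address newOwner) {
            owner = newOwner;
        }

        // Marked internal, as the patch did for the real functions: no
        // longer part of the externally callable surface.
        function initWalletInternal(address newOwner) internal {
            owner = newOwner;
        }
    }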

When I started learning Solidity and came across this detail, I was surprised. Contract developers should have to be explicit about which functions may be called externally. The current default is akin to writing an API where every function of your application is exposed to the Internet unless you explicitly mark it private.

Needless to say, I think that this is a bad convention for a smart contract programming language.

EIP 214

EIP 214 introduces a new EVM opcode named staticcall. It is a variant of the call operation with an added security property: it allows your contract to call another contract while disallowing state changes. If the called contract attempts to perform a state-changing operation (such as modifying storage), an exception is thrown.

The staticcall operation is safer to use than call, because it guarantees that there won’t be any side effects from calling another contract. This can be used to help prevent reentrancy attacks, in which an attacker tricks your contract into re-calling itself. The unexpected state it ends up in is then used to perform a nefarious action (e.g., withdrawing more funds than should be allowed).
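
Although this post predates direct language support, later Solidity versions (0.5 and up) expose the opcode through a low-level staticcall member on addresses. A minimal sketch of a guaranteed read-only external call (the token address and balanceOf signature here are just examples):

    pragma solidity ^0.8.0;

    contract ReadOnlyCaller {
        // Read-only call into an untrusted contract. If the callee tries to
        // modify state (write storage, emit logs, create contracts, send
        // value), the STATICCALL fails instead of going through.
        function readBalance(address token, address who) external view returns (uint256) {
            (bool ok, bytes memory ret) =
                token.staticcall(abi.encodeWithSignature("balanceOf(address)", who));
            require(ok, "static call failed");
            return abi.decode(ret, (uint256));
        }
    }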

I’m excited to see this get rolled out. I think it will help developers write safer contracts.

Analyzing the ERC20 Short Address Attack

Back in April of 2017, the Golem Project published a blog post about the discovery of a security bug affecting some exchanges, such as Poloniex. According to the post, when certain exchanges processed transactions of ERC20 tokens, input validation was not being performed on account address length. The result was malformed input data being provided to the contract’s transfer function, and a subsequent underflow condition that manipulated the amount being sent. The impact was that an attacker could potentially rob an exchange account of tokens.
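
To make the mechanism concrete: if the recipient address passed to transfer is a byte short and nothing validates its length, the ABI-encoded arguments shift, and the amount the contract decodes is no longer the amount the sender intended. The sketch below shows the calldata-length guard that circulated at the time as a token-side mitigation (a generic example, not Golem’s or any exchange’s actual code); the more fundamental fix is for clients and exchanges to validate address length before constructing the transaction:

    pragma solidity ^0.4.11;

    contract Token {
        mapping(address => uint256) balances;

        // Reject calldata shorter than the 4-byte selector plus the expected
        // number of 32-byte arguments, so a truncated address can't shift
        // bytes out of the amount parameter.
        modifier onlyPayloadSize(uint256 numWords) {
            require(msg.data.length >= numWords * 32 + 4);
            _;
        }

        function transfer(address to, uint256 value)
            public
            onlyPayloadSize(2)
            returns (bool)
        {
            require(balances[msg.sender] >= value);
            balances[msg.sender] -= value;
            balances[to] += value;
            return true;
        }
    }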

The attack explained by the Golem Project exemplifies a rather unique case, in which an exchange acts as both a client and a server. That is, the exchange is a server for users to buy tokens as well as a client to the Ethereum network. This differs from typical contract interaction in which a client uses the Ethereum network directly, and any transaction error would likely be the sole fault of the client and not a third-party. Luckily for the Golem Project, the vulnerability is not known to have ever been exploited. It has since been dubbed the “ERC20 short address attack.” (more…)

Running Your Own Private Ethereum Network

If you’re looking to get your feet wet in Ethereum or test out a new contract that you’re developing, you may choose to run your own private network rather than using one of Ethereum’s public testnets. By running your own private network, you maintain total control over it and can create specific test conditions that you may find useful. You also don’t risk having others discover your new contract before you’re ready to announce it to the world.

Setting up and running your own private network is relatively easy. I present two popular options, each with their own pros and cons:

  • Geth is a popular, fully-fledged client and is able to do this out of the box. Setup is required, but it’s fairly straightforward (see the sketch after this list).
  • TestRPC simulates an in-memory blockchain and provides an HTTP RPC server. It is extremely fast and easy to set up and tear down. However, as of today TestRPC does not implement every Ethereum API. These limitations are apparent, for example, when trying to use Mist to send personal transactions (see bug report here).
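
For the Geth route, the setup boils down to writing a genesis file and pointing a node at it. The following is only a sketch; genesis fields and flags vary between geth releases, so treat the values (chain ID 15, the data directory name, the difficulty and gas limit) as placeholders and consult geth --help for your version:

    # genesis.json - a minimal genesis block definition
    {
      "config": {
        "chainId": 15,
        "homesteadBlock": 0,
        "eip155Block": 0,
        "eip158Block": 0
      },
      "difficulty": "0x400",
      "gasLimit": "0x8000000",
      "alloc": {}
    }

    # Initialize a fresh data directory from the genesis file, then start a
    # node with peer discovery disabled so only peers you add manually can
    # connect.
    geth --datadir ./private-net init genesis.json
    geth --datadir ./private-net --networkid 15 --nodiscover console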

(more…)

Google Account Security and Number Portability

By now, you may have read this story about someone having $8,000 worth of Bitcoin stolen due to a social engineering attack on their Verizon account. This was an unfortunate event and an urgent reminder that SMS-based 2FA isn’t secure. When you allow a second factor of authentication to occur over SMS, the proof isn’t that you have your phone. Rather, it’s that you are able to receive SMS messages sent to a certain number. The problem with this as a means of authentication can be summed up in two words: number portability. If an attacker can social engineer your mobile provider, they can port your number over to their own account and your 2FA provider would never know the difference.

This got me thinking. How secure is my Google account, even when locked down with 2FA via the Google Authenticator application? Would I be able to withstand an attack similar to the one that Cody Brown suffered?

As it turned out, I wouldn’t. A serious security concern appeared when I went through the account recovery flow for my Google account. The following events illustrate this:

  1. Start the login process on accounts.google.com by entering my username. Click “Forgot password?”
  2. Be asked to “enter the last password you remember.” Click “Try a different question.”
  3. Be asked to “enter a verification code.” Click “Try a different question.”
  4. Be asked to “get a verification code by text message at: (***) ***-**-XX.” Since my cell phone number appears on my business cards and is public information as far as I’m concerned, this would hardly deter an attacker. By taking advantage of number portability, an attacker could steal my number.
  5. Be asked to “confirm the phone number you provided in your security settings: (***) ***-**XX.” Since I just received a text sent to this number, I obviously know this.
  6. Answer the security question “What is my father’s middle name?” Skipping this forced me to specify the month and year my account was created. While the first security question is terrible, the second option isn’t much better, as there are only a limited number of possible answers.
  7. Change my password.
  8. Login to my account.

That’s right. Despite using the Google Authenticator application, I was able to effectively skip it and instead opt for receiving a text and answering a lame security question.

Now to be fair, Google discontinued security questions a while ago. However, they stick around in your account until you delete them. And that’s just one symptom of the problem here: Google’s account recovery flow falls back to other forms of verification that you may not even be aware of.

I get why Google designed the account recovery workflow to be this way. For the average user, getting access restored to their account may be more important than locking out adversaries. But for those of us who beg to differ, this can have disastrous consequences.

I urge you to review your 2-Step Verification and remove “Voice or text message” as an alternative second step, as well as any legacy credentials such as security questions. Only trust cryptographically secure 2FA. To prevent accidental lockout, store your 2FA recovery codes somewhere safe.

Your future self will thank you.

OSCP Certified

On December 1st, I took the Offensive Security Certified Professional (OSCP) exam and successfully earned my certification. For those unfamiliar with OSCP, it is a hands-on training course and certification offered by Offensive Security. The content it focuses on is immense; everything from SQL injection to writing your own remote buffer overflow exploits is covered by the course e-book and videos. There is also lengthy coverage of how to properly enumerate hosts and take inventory of an entire network.

(more…)

Client-Side Redis Attack Proof of Concept

Note: This issue is being discussed about a year late, as it was sitting forgotten in my blog post queue for some time. However, I have decided to post it now as it is still very much relevant. The attack explained below appears to still work on version 3.2.1 of Redis (tested on OS X and installed via brew). If the PoC fails and your inputrc file isn’t written to, it’s likely a directory permissions issue. Perhaps Redis is running as its own user, as it should?
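
For readers who haven’t seen the underlying primitive: the reason a PoC like this can drop a file such as ~/.inputrc at all is that an unauthenticated Redis instance lets you repoint its persistence settings and then force a dump to disk. The commands below show the equivalent done locally with redis-cli (the target path and payload are placeholders); a client-side PoC just has to get commands like these delivered to the Redis instance listening on loopback.

    # Point Redis' persistence directory and dump filename at the target
    # file, store an attacker-controlled line, then force a save. The RDB
    # dump (mostly binary, with the payload embedded) lands at that path.
    redis-cli CONFIG SET dir /Users/victim
    redis-cli CONFIG SET dbfilename .inputrc
    redis-cli SET payload "attacker-controlled line here"
    redis-cli SAVE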

The moral of the story is that even services on your own laptop that only listen on the loopback interface still need to be locked down. (more…)

Cross-Site Scripting via DOM-Based Open Redirects

Consider the following JavaScript application which clearly contains a DOM-based open redirect vulnerability:
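
A minimal sketch of such an application (the redirect parameter name is illustrative, not taken from the original example):

    // Pull a "redirect" parameter out of the query string and send the
    // browser there without any validation: a DOM-based open redirect.
    var match = /[?&]redirect=([^&]+)/.exec(window.location.search);
    if (match) {
        window.location = decodeURIComponent(match[1]);
    }

With this in place, a crafted link like page.html?redirect=https://attacker.example sends the victim wherever the attacker likes.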

As if that weren’t bad enough, this application is also vulnerable to cross-site scripting, though less obviously so. Consider what would happen if the window’s location were set to javascript:alert().


This is effectively the same thing as typing javascript:alert() into the navigation bar of your browser and hitting enter. This behavior surprised me, because it’s something I wouldn’t have thought modern browsers would allow. And yet the latest versions of Google Chrome (50.0.2661.102) and Firefox (46.0.1) both do. I cannot think of a legitimate reason for an assignment to window.location to execute code.

In conclusion: Don’t forget to submit your DOM-based open redirect bugs as XSS bugs from now on. They tend to pay out more in bug bounty programs.