Running Your Own Private Ethereum Network

If you’re looking to get your feet wet in Ethereum or test out a new contract you’re developing, you may choose to run your own private network rather than use one of Ethereum’s public testnets. By running your own private network, you maintain total control over the network and can create specific test conditions you may find useful. You also don’t risk having others discover your new contract before you’re ready to announce it to the world.

Setting up and running your own private network is relatively easy. I present two popular options, each with their own pros and cons:

  • Geth, a popular fully fledged client, can do this out of the box. Some setup is required, but it’s fairly straightforward.
  • TestRPC simulates an in-memory blockchain and provides an HTTP RPC server. It is extremely fast and easy to set up and tear down. However, as of today TestRPC does not implement every Ethereum API. These limitations become apparent, for example, when trying to use Mist to send personal transactions (see the bug report here).
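For the Geth route, the setup can be sketched roughly as follows. The genesis parameters, chain ID, and directory names below are arbitrary choices of mine, and exact flags may differ between Geth versions:

```shell
# Write a minimal custom genesis block. Low difficulty makes mining on a
# laptop practical; chainId 1337 is an arbitrary private-network choice.
cat > genesis.json <<'EOF'
{
  "config": { "chainId": 1337, "homesteadBlock": 0 },
  "difficulty": "0x400",
  "gasLimit": "0x8000000",
  "alloc": {}
}
EOF

# Initialize a fresh data directory from that genesis file...
geth --datadir ./privatechain init genesis.json

# ...then start a node that won't try to discover public peers.
geth --datadir ./privatechain --networkid 1337 --nodiscover console

# The TestRPC alternative, by contrast, is a one-liner once installed:
npm install -g ethereumjs-testrpc
testrpc
```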


Google Account Security and Number Portability

By now, you may have read this story about someone having $8,000 worth of Bitcoin stolen due to a social engineering attack on their Verizon account. This was an unfortunate event and an urgent reminder that SMS-based 2FA isn’t secure. When you allow a second factor of authentication to occur over SMS, the proof isn’t that you have your phone. Rather, it’s that you are able to receive SMS messages sent to a certain number. The problem with this as a means of authentication can be summed up in two words: number portability. If an attacker can social engineer your mobile provider, they can port your number over to their own account and your 2FA provider would never know the difference.

This got me thinking. How secure is my Google account, even when locked down with 2FA via the Google Authenticator application? Would I be able to withstand an attack similar to the one that Cody Brown suffered?

As it turned out, I wouldn’t. A serious security concern appeared when I went through the account recovery flow for my Google account. The following events illustrate this:

  1. Start the login process by entering my username. Click “Forgot password?”
  2. Be asked to “enter the last password you remember.” Click “Try a different question.”
  3. Be asked to “enter a verification code”. Click “Try a different question.”
  4. Be asked to “get a verification code by text message at: (***) ***-**-XX.” Since my cell phone number appears on my business cards and is public information as far as I’m concerned, this would hardly deter an attacker. By taking advantage of number portability, an attacker could steal my number.
  5. Be asked to “confirm the phone number you provided in your security settings: (***) ***-**-XX.” Since I just received a text sent to this number, I obviously know this.
  6. Answer the security question “What is your father’s middle name?” Skipping this forced me to specify the month and year my account was created. While the first security question is terrible, the second option isn’t much better, as there are very few possible answers.
  7. Change my password.
  8. Login to my account.

That’s right. Despite using the Google Authenticator application, I was able to effectively skip it and instead opt for receiving a text message and answering a lame security question.

Now to be fair, Google discontinued security questions a while ago. However, they stick around in your account until you delete them. And that’s just one symptom of the problem here: Google’s account recovery flow falls back to other forms of verification that you may not even be aware of.

I get why Google designed the account recovery workflow to be this way. For the average user, getting access restored to their account may be more important than locking out adversaries. But for those of us who beg to differ, this can have disastrous consequences.

I urge you to review your 2-Step Verification and remove “Voice or text message” as an alternative second step, as well as any legacy credentials such as security questions. Only trust cryptographically secure 2FA. To prevent accidental lockout, store your 2FA recovery codes somewhere safe.

Your future self will thank you.

OSCP Certified

On December 1st, I took the Offensive Security Certified Professional (OSCP) exam and successfully earned my certification. For those unfamiliar with OSCP, it is a hands-on training course and certification offered by Offensive Security. The content is immense: everything from SQL injection to writing your own remote buffer overflow exploits is covered by the course e-book and videos. There is also lengthy coverage of how to properly enumerate hosts and take inventory of an entire network.


Client-Side Redis Attack Proof of Concept

Note: This issue is being discussed about a year late, as it sat forgotten in my blog post queue for some time. However, I have decided to post it now, as it is still very much relevant. The attack explained below appears to still work on version 3.2.1 of Redis (tested on OS X, installed via Homebrew). If the PoC fails and your inputrc file isn’t written to, it’s likely a directory permissions issue. Perhaps Redis is running as its own user, as it should be?

The moral of the story is that even services on your own laptop that only listen on the loopback interface still need to be locked down.
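The write primitive behind the attack is worth spelling out. The commands below are a sketch of what a malicious page would ultimately smuggle to 127.0.0.1:6379 (the path and payload are illustrative); they abuse Redis’s CONFIG SET and SAVE commands to write a file of the attacker’s choosing into the victim’s home directory:

```shell
# What the client-side attack boils down to, once commands reach a localhost
# Redis (e.g. via a cross-protocol HTTP POST from the victim's browser).
# Point the dump file at a dotfile in the victim's home directory...
redis-cli -h 127.0.0.1 CONFIG SET dir /Users/victim
redis-cli -h 127.0.0.1 CONFIG SET dbfilename .inputrc

# ...store an attacker-controlled value, then persist the database. The dump
# file contains binary framing, but readline skips lines it can't parse, so
# the embedded payload line still takes effect.
redis-cli -h 127.0.0.1 SET payload "attacker-controlled inputrc line"
redis-cli -h 127.0.0.1 SAVE
```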

Cross-Site Scripting via DOM-Based Open Redirects

Consider the following JavaScript application which clearly contains a DOM-based open redirect vulnerability:
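A minimal sketch of such an application (function and parameter names are my own, not from the original):

```javascript
// Hypothetical vulnerable page script: redirect wherever ?next= points.
function getRedirectTarget(search) {
  var params = new URLSearchParams(search);
  return params.get("next"); // no validation whatsoever: open redirect
}

// In the page itself, the target is followed on load:
// window.location = getRedirectTarget(window.location.search);
```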

As if this weren’t bad enough, the application is also vulnerable to something less obvious: cross-site scripting. Consider what would happen if the window’s location were set to javascript:alert().


This is effectively the same thing as typing javascript:alert() into the navigation bar of your browser and hitting Enter. This behavior surprised me, because it isn’t something I would expect modern browsers to allow. And yet the latest versions of Google Chrome (50.0.2661.102) and Firefox (46.0.1) both do. I cannot think of a legitimate reason for an assignment to window.location to execute code.
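One way to close both holes is to resolve the target and allow only http(s) schemes before assigning to window.location. A sketch under that approach (the helper name and fallback are my own):

```javascript
// Resolve the caller-supplied target against a base URL and reject anything
// that isn't plain http(s) -- javascript:, data:, etc. fall back to a default.
function safeRedirectTarget(target, base, fallback) {
  var url;
  try {
    url = new URL(target, base);
  } catch (e) {
    return fallback; // unparseable input
  }
  if (url.protocol === "http:" || url.protocol === "https:") {
    return url.href;
  }
  return fallback;
}

// In the page:
// window.location = safeRedirectTarget(target, window.location.href, "/");
```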

In conclusion: Don’t forget to submit your DOM-based open redirect bugs as XSS bugs from now on. They tend to pay out more in bug bounty programs.

Parameter Tampering Attack on Twitter Web Intents

Twitter’s Web Intents allow visitors of a website to interact with content on Twitter without having to leave the website. This is done by means of a popup for desktop users, and native app handlers for iOS and Android users. This is the same platform powering the “tweet” and “follow” buttons you may see on webpages across the Internet.

I identified parameter tampering vulnerabilities that, taken together, affected all four Web Intent types. They allowed an attacker to stage a Web Intent dialog with tampered parameters, which could lead to a visitor following a Twitter user they didn’t intend to follow.

All four intent types were vulnerable: Following a user, liking a tweet, retweeting, and tweeting or replying to a tweet.
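As a point of reference, a Web Intent is just a URL carrying query parameters; the follow intent, for example, looks like this (the screen name here is illustrative):

```
https://twitter.com/intent/follow?screen_name=twitter
```

Tampering with those query parameters is what made staging a misleading dialog possible.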


Regex Security Issues in Ruby

I see this kind of problem everywhere in the Ruby ecosystem, despite it being an old one.

Consider the regular expression /^https?:\/\/[\S]+$/:
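For instance, a well-formed URL matches as expected (the sample string is mine):

```ruby
# A plain URL passes the check, as intended.
url = "http://example.com"
puts(url =~ /^https?:\/\/[\S]+$/ ? "valid" : "invalid")  # prints "valid"
```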

So far so good. However, consider this:
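A multiline string sneaks right past the same check (the sample input is mine):

```ruby
# The first line is attacker-controlled garbage, yet the regex still
# "validates" the string: the match simply anchors to the second line.
input = "javascript:alert(1)\nhttp://example.com"
puts(input =~ /^https?:\/\/[\S]+$/ ? "valid" : "invalid")  # prints "valid"
```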

This matches our regex because in Ruby, ^ matches the beginning of a line and $ matches the end of one, not the beginning and end of the string. This is a very common misunderstanding of how Ruby regexes work. The impact varies from annoying unexpected input to cross-site scripting to remote code execution; it all depends on what is done with the input.

To properly match the beginning and end of a string, \A and \z should be used respectively.
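With the corrected anchors, the multiline bypass no longer works:

```ruby
# \A and \z anchor to the start and end of the whole string.
SAFE_URL = /\Ahttps?:\/\/[\S]+\z/

"http://example.com" =~ SAFE_URL                       # => 0 (matches)
"javascript:alert(1)\nhttp://example.com" =~ SAFE_URL  # => nil (rejected)
```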