There are many developers who are not presently active on a Ruby on Rails
project who nonetheless have a vulnerable Rails application running on
localhost:3000. If they do, eventually, their local machine will be
compromised. (Any page on the Internet which serves Javascript can, currently,
root your Macbook if it is running an out-of-date Rails on it. No, it
does not matter that the Internet can’t connect to your
localhost:3000, because your browser can, and your browser will follow
the attacker’s instructions to do so. It will probably be possible to
eventually do this with an IMG tag, which means any webpage that can
contain a user-supplied cat photo could ALSO contain a user-supplied
remote code execution.)
That reminded me of an incredible presentation WhiteHat did back in 2007 on cracking intranets. Slides[1] are still around, though I couldn't readily find the video.
Yep. localhost:3000 is only the most obvious guess you could make, too. You could try redmine:3000 and see who that worked on, or 192.168.[enumerate all IPs], or the top 1,000 host names, or use a Javascript port scanner, or... yeah, lots of bad stuff. (I thought getting into that rabbit hole would make a long and convoluted post even longer. Suffice it to say the world is a grimmer and more dangerous place than we thought it was.)
In addition to common port numbers and stuff like redmine, their tipoffs include looking for Rails-style session cookies, and HTTP response headers emitted by Rails or support machinery. These include "X-Rack-Cache:" and the "X-Powered-By:" header that Phusion Passenger tosses in even if you've configured Apache itself to leave version numbers and component identifiers out of the response. (I'm not sure there's any better way to suppress this stuff than adding mod_headers to the Apache config and using "Header unset".)
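For reference, a minimal sketch of that mod_headers approach; the module path is a placeholder that varies by distribution, and the two header names are the ones mentioned above:

```apache
# Load mod_headers if it isn't already enabled (path varies by distribution)
LoadModule headers_module modules/mod_headers.so

# Strip the framework-identifying response headers before they leave the server
Header unset X-Powered-By
Header unset X-Rack-Cache
```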
Note: From a sysadmin standpoint, http://localhost:3000 commonly refers to http://127.0.0.1:3000. When you run "rails server" locally in development mode, though, you actually get http://0.0.0.0:3000. These are not the same! 127.0.0.1 means that "rails server" can only be accessed from your local machine, while 0.0.0.0 means it can be accessed on any address your computer is listening on. If you are on a local intranet, say at the office, then you probably have both a 127.0.0.1 and a 192.168.x.x interface, so everyone on the network can access it via 192.168.x.x, or, god forbid, a public IP ;)
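The difference is easy to see with plain Ruby sockets, no Rails required; a small sketch (port 0 just asks the OS for any free port):

```ruby
require "socket"

# Bind one server to the loopback address only, and one to all interfaces.
loopback_only  = TCPServer.new("127.0.0.1", 0)
all_interfaces = TCPServer.new("0.0.0.0", 0)

# addr[3] is the numeric local address the socket is bound to.
puts loopback_only.addr[3]   # => "127.0.0.1" -- reachable only from this machine
puts all_interfaces.addr[3]  # => "0.0.0.0"   -- reachable on every interface, 192.168.x.x included
```

Passing -b to "rails server" (e.g. "rails server -b 127.0.0.1") restricts the bind address the same way.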
Again, even if your development box is physically protected by the Swiss Guard, with a firewall that sprang from Donald Knuth's forehead and the River Styx separating it from all inbound connection attempts, it won't matter: you run a browser on your development box, that browser can always connect to your development box, and that browser can be instructed to pass malicious input to your development box when you do innocuous things with it, like viewing web pages on the public Internet.
Yeah, I get it. I guess I'm making an additional point: anyone can have direct access to the development environment via any address your machine is listening on -- not just localhost.
Other than ease of setup, I've never understood why you wouldn't develop in the same environment as the one you run in production. Setting up a VM is trivial and lets you easily open and close access to your application as needed.
There are also far fewer headaches once you decide to move it into production.
I don't think this would be any more secure if you were to enable networking on the VM to allow requests from the host machine, which seems like common behavior so that the developer can access the webapp running on the VM from a browser on the host. Or are people developing inside a VM and then testing with a browser on the VM?
You can set up a cloud VM (on Rackspace, for example), and then set up your VM's firewall (iptables) to only allow connections from your test machines (your local IP, or the IP addresses of the test machines from Browserstack or Sauce). This lets you keep your dev/staging/prod environments in sync (and blueprint/image your dev setup to rebuild it elsewhere), decouple your dev from your staging/prod, and develop from anywhere without needing to carry around the same laptop or rebuild your dev environment on another box -- particularly if you use an intermediate system with a static IP so you can reconfigure your firewall whenever needed.
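A hedged sketch of those firewall rules, assuming iptables; 203.0.113.7 is a placeholder standing in for your trusted test machine's address, and 3000 for the dev app port:

```shell
# Accept the dev app port only from the trusted tester IP...
iptables -A INPUT -p tcp --dport 3000 -s 203.0.113.7 -j ACCEPT
# ...and drop that port for everyone else.
iptables -A INPUT -p tcp --dport 3000 -j DROP
```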
You might be thinking of preventing Javascript on host X from sending XMLHttpRequests to host Y. That will not prevent Javascript on host X from adding a form to the web page and having it post to host Y with arbitrary content, or from having an IMG tag on host X attempt to load (via a GET) a URL on host Y (assuming someone finds a pathway that works via GET requests for these or related vulnerabilities).
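Concretely, this is the kind of thing a malicious page could embed (a hypothetical sketch, not a working exploit; the target path and parameter names are made up):

```html
<!-- Auto-submitting cross-origin form: no XMLHttpRequest involved, so the
     same-origin policy does not stop the request from being *sent*. -->
<form id="csrf" method="POST" action="http://localhost:3000/some/endpoint">
  <input type="hidden" name="payload" value="attacker-controlled content">
</form>
<script>document.getElementById("csrf").submit();</script>

<!-- Or a bare GET via an image tag, no Javascript required at all: -->
<img src="http://localhost:3000/some/endpoint?payload=attacker-controlled">
```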
AFAIK you can't use cross-site requests to exploit either the XML bug or the JSON bug without also exploiting a browser or plugin bug. Both issues depend on setting a request header, and you are not allowed to do this in the browser security model. But it sucks that a CSRF bug becomes an RCE bug :(
I actually lied :) There is #from_xml, so if you were doing Hash.from_xml(params[:trololol]) or Post.from_xml(params[:lols]) then you would be vulnerable to the localhost:3000 attack. But I don't think there is a generic attack; it would have to be application-specific.
[1]: https://www.whitehatsec.com/assets/presentations/blackhatusa...