For web scraping, a dynamically typed language makes handling JSON requests and responses A LOT easier, so that eliminates Go, Java, C, C++ and C#.
Error handling in node.js is a complete joke, and so is the callback hell in its async model. PHP was designed for embedding code into HTML pages; sure, you can try to do scraping with it by bringing in libcurl, but it will still be a painful experience. Perl works, but these days Python does everything Perl can do in a simpler manner, and everyone is gravitating toward it.
Scraping is just a joy to do with Python. The requests library makes HTTP easy, BeautifulSoup parses whatever messy HTML a site throws at you, regex works for quick-and-dirty extraction when you are stuck with raw HTML instead of JSON, and for advanced cases where you need to simulate user input on a JavaScript-heavy page there is Selenium. And to run multiple concurrent requests, use gevent with monkeypatching, which makes all the async handling transparent without having to go through node.js callback hell.
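As a rough sketch of the basic requests-plus-BeautifulSoup flow (the HTML snippet below is a toy stand-in for a real response body, which you would normally get from `requests.get(url).text`):

```python
from bs4 import BeautifulSoup

# Toy HTML standing in for a real page body (resp.text from requests).
html = """<html><body>
<a href="/page1">One</a>
<a href="/page2">Two</a>
</body></html>"""

# "html.parser" is the stdlib backend; lxml is a faster drop-in if installed.
soup = BeautifulSoup(html, "html.parser")

# Collect the href of every anchor tag, in document order.
links = [a["href"] for a in soup.find_all("a", href=True)]
print(links)
```

The same `find_all` / attribute-access pattern covers most extraction jobs; CSS selectors via `soup.select(...)` are there when the markup gets hairier.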
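The gevent approach can be sketched like this. The URLs are placeholders, and `fetch_all` is a hypothetical helper name, not part of any library; the one real gotcha is that `monkey.patch_all()` must run before anything imports the socket machinery it patches:

```python
from gevent import monkey
monkey.patch_all()  # must run first, before requests (or anything using sockets) is imported

import gevent
import requests

def fetch(url):
    # With monkey.patch_all() applied, requests' blocking socket I/O
    # yields to other greenlets instead of blocking the whole process.
    resp = requests.get(url, timeout=10)
    return url, resp.status_code

def fetch_all(urls):
    # Spawn one greenlet per URL; they all run concurrently.
    jobs = [gevent.spawn(fetch, u) for u in urls]
    gevent.joinall(jobs, timeout=30)
    return [job.value for job in jobs]
```

The code stays straight-line synchronous-looking; the concurrency is invisible, which is exactly the point versus node-style callbacks.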