The prediction marks a significant revision of previous estimates of the so-called technological singularity, the point at which machine intelligence surpasses human intelligence and accelerates at an incomprehensible rate.
Noted futurist Ray Kurzweil previously pegged this superintelligence tipping point at around 2045, citing exponential advances in technologies like robotics, computers and AI.
Mr Musk, whose ventures include electric car maker Tesla and space firm SpaceX, said in an interview with The New York Times that current trends suggest AI could overtake humans by 2025.
The billionaire engineer, who also helped found the artificial intelligence research lab OpenAI in 2015, has warned consistently in recent years of the existential threat posed by advanced artificial intelligence. Despite this, he said he feels the issue is still not properly understood.
“My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false,” he said.
“We’re headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”
In 2016, Mr Musk said that humans risk being treated like house pets by artificial intelligence unless technology is developed that can connect brains to computers.
Shortly after making the remarks, Mr Musk announced a new brain-computer interface startup that is attempting to implant a brain chip using a “sewing machine-like device”.
Neuralink will allow humans to compete with AI, according to Mr Musk, as well as help cure brain diseases, control mood and even let people “listen to music directly from our chips”.
Both Mr Musk and Mr Kurzweil were among the prominent figures in artificial intelligence to pledge support for stringent guidelines on the development of advanced AI.
An open letter published by the Future of Life Institute (FLI) in 2017 outlined a set of principles deemed necessary to avoid an out-of-control AI, as well as a doomsday scenario involving lethal autonomous weapons.
“We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years,” the institute said at the time.