Alexandr Wang at Barker Hangar in Santa Monica, California, on April 5, 2025. Credit: Taylor Hill/FilmMagic
On June 12, Alexandr Wang stepped down as CEO of Scale AI to pursue his most ambitious moonshot yet: building smarter-than-human AI as the head of Meta's new "superintelligence" division. As part of the move, Meta will invest $14.3 billion in Scale AI for a minority stake, but the real prize is not his company; it is Wang himself.
The 28-year-old Wang is expected to inject urgency into Meta's AI efforts, which have been plagued by delays and underwhelming performance this year. Once the undisputed leader in open-weight AI, the U.S. tech giant has been overtaken on popular benchmarks by Chinese rivals such as DeepSeek. Although Wang, who dropped out of MIT at 19, lacks the academic credentials of some of his peers, he offers both insight into the kinds of data Meta's competitors use to improve their AI systems and an unmatched ambition. Google and OpenAI are reportedly winding down their business with Scale AI in the wake of the Meta deal. Scale declined to comment, but its interim CEO emphasized in a blog post that the company will continue to operate independently.
Big goals are Wang's thing. At 24, he became the world's youngest self-made billionaire by building Scale into a key player that labels data for the giants of the artificial intelligence industry. "Ambition shapes reality" is one of Scale's core values, a motto Wang coined. That journey has earned him the admiration of OpenAI CEO Sam Altman, who lived in Wang's apartment for months during the pandemic.
But his relentless ambition has come with trade-offs. He attributes Scale's success to treating data as a "first-class problem," but that focus has not always extended to the company's army of over 240,000 contract workers, some of whom have had payments delayed, reduced, or cancelled after completing tasks. Lucy Guo, who co-founded Scale but left in 2018 after disagreements with Wang, said it was one of their "clashes."
"I was like, 'We need to focus on making sure they get paid out on time,'" while Wang was more concerned with growth, Guo says. Scale AI said that instances of late payment are extremely rare and that it is constantly improving.
The stakes of that mindset now go beyond growth and costs. Superintelligent AI "would represent the most precarious technological development since the nuclear bomb," argued a policy paper Wang co-authored in March with Eric Schmidt, the former CEO of Google, and Dan Hendrycks, director of the Center for AI Safety. Wang's new role at Meta makes him a key decision-maker for a technology that leaves no room for error.
TIME spoke with Wang in April, before he stepped down as Scale's CEO. He discussed his leadership style, how prepared the U.S. government is for AGI, and the areas where AI still falls short.
This interview has been condensed and edited for clarity.
Your leadership style has been described as very in-the-weeds. For example, it was reported that you would take a one-on-one call with every new employee, even when hires numbered in the hundreds. How has your view of leadership evolved as Scale has grown?
Leadership is a very multifaceted discipline, right? There's level one: can you accomplish the things that are right in front of you? Level two is: are the things you're doing even the right things? Are you pointing in the right direction? And then there's level three, which is probably the most important: what is the culture of the organization? All that kind of stuff.
I definitely think my approach to leadership is one of very high attention to detail. It's very in the weeds, quite focused, conveys a high degree of urgency, and really tries to ensure that the organization moves as quickly and urgently as possible on the critical problems.
But also, how do you develop a healthy culture? How do you build an organization where people are placed in positions where they can do their best work, and where they learn and grow in those environments? If you point toward a mission that is larger than life, you have the ability to accomplish things that are truly great.
Since a trip to China in 2018, you have been outspoken about the threat posed by China's AI ambitions. Particularly after DeepSeek, that view has become much more dominant in Washington. Do you hold other views on AI development that might seem fringe now but could be mainstream in about five years?
I think the agentic world: one in which companies and governments increasingly conduct their economic activity through agents; where people act more and more like managers and supervisors of those agents; where we begin shifting and delegating more economic activity to agents. That is certainly the future, and how we as a society navigate that transition with minimal disruption is very, very non-trivial.
It definitely sounds scary when you talk about it, and I think that's an indication that it won't be something that's very easy to achieve or very easy to get right. My belief is that there are a number of things we have to build, and have to do correctly, to ensure this transition is smooth.
I think there's a lot of excitement and energy around this kind of agentic world. And we think it touches every facet of our world. Companies become agentic companies. Governments become agentic governments. Warfare becomes agentic warfare. It will cut deeply into everything we do, and there are a few key pieces, both infrastructure that needs to be built and important policy decisions and important decisions [about] how it gets implemented in the economy, all of which are quite critical.
What is your assessment of how prepared the U.S. government is, and how seriously it takes the possibility of AGI [artificial general intelligence]?
I think AI is very, very top of mind for the administration, and I think there's a lot to assess: What is the rate of progress? How quickly do we reach what most people would call AGI? Slower timeframe, faster timeframe? In the case where it's a faster timeframe, what are the right things to fix? I think these are important conversations.
If you look at Vice President J.D. Vance's speech at the Paris AI Action Summit, he explicitly talks about the current administration's focus on the American worker, and on making sure AI benefits the American worker.
I think as AI progresses, and the industry is moving at an exciting speed, people will take note and take action.
One job that seems ripe for disruption is data annotation itself. We've seen AI models used internally to caption the dataset for OpenAI's Sora, and at the same time reasoning models are being trained on synthetic self-play data for defined challenges. Do you think these trends pose a disruption risk to Scale AI's data-annotation business?
I actually think it's exactly the opposite. If you look at the growth of AI-related jobs contributing to AI datasets (there are many words for it, but we call these people "contributors"), it has grown exponentially over time. There's a lot of conversation about whether, as the models get better, the work disappears. The reality is that the work continues to grow year over year, and you can see that in our growth.
So my expectation is actually that, if you draw that line forward to an agentic economy, more people will be doing what we currently consider data work, and it will be an increasingly large part of the economy.
Why couldn't we automate AI data work?
Automating AI data work is a bit of a tautology, because AI data work exists to make the models better. If the models were already good at the things they're producing data for, you wouldn't need the data at all. Fundamentally, AI data work focuses on the areas where the models are weak. And as AI is deployed in more and more places across the economy, we will keep finding new shortcomings.
You can step back and squint, and the AI models seem really smart. But if you actually try to use them to do a number of important workflows in your job, you'll find they're pretty poor. And so I think humanity as a society will never stop finding areas where these models need to improve, which drives a constant need for AI data work.
Part of Scale's pitch has been positioning itself as a technology company as well as a data company. How did you pull that off, and how did it differentiate you from the competition?
If you take a big step back, AI progress rests on three pillars: data, compute, and algorithms. It became very clear that data was one of the most important bottlenecks in this industry. Compute and algorithms became bottlenecks too, but data was right up there with them.
I think before Scale there was no company that treated data as the first-class problem it really is. At Scale, one of the things we've really done is treat data with the respect it deserves. We really tried to understand: "How do we solve this problem the right way? How do we solve it in a technology-forward way?"
Once you have those three pillars, you can build applications on top of the data and the algorithms. And what we built at Scale is the platform that originally underpinned the data pillar for the entire industry. Then we also found that, with that pillar, we could build upward and help enterprises and governments build and deploy AI applications on top of their incredible data. I think that's what really set us apart.
Contact us at letters@time.com.