Today, I’m talking with Jennifer Scanlon, who is the CEO of UL Solutions. That’s Underwriters Laboratories – you know, the UL logo listed on all your electronics? That symbol means it’s been tested and found safe in a variety of ways. UL’s been around for 100 years. It started as a way for insurance companies to do fire and safety testing on electrical products just as electricity was coming into homes.
But now it’s everywhere, and it’s one of those companies we really like to poke at here on Decoder that’s basically hidden in plain sight — that logo is on everything. But scratch the surface and the business of UL is pretty complicated. There are a ton of cheap electronics on Amazon, and maybe people just care about price and not certifications. The company is also now trying to do safety testing for AI systems; it just rolled out a new standard called UL 3115, “a structured framework to evaluate AI-based products before and during deployment.” That kind of standard requires a lot of companies and regulators to buy in — and for there to be a way to even reliably safety test AI at all. And then there’s the structure of UL, which — well, you’ll see. It’s complicated.
But sure, structure and whatever, we’ll get there, but first, I had to ask Jennifer if she got to watch stuff explode in the testing labs. Because to me that seems like the best part of working for an organization that sets safety standards. A lot of stuff blows up in the labs, and you’ll hear Jennifer say her office often rattles because of it.
But there are other complications: Right at the tail end of the Biden administration, UL got tapped to be the lead administrator for the new US Cyber Trust Mark, a program that was supposed to set a cybersecurity standard for connected devices — the whole Internet of Things. But then the Trump administration came to power, and good old Brendan Carr has been coming up with reasons — which of course never actually get articulated to anyone — why any company related to China is somehow now a threat. That, apparently, includes UL, which of course has safety labs in China, since that’s where the electronics are made. So UL lost out on that deal. I asked Jennifer about it pretty directly, since that’s really a microcosm of pretty much everything going on with safety, tech, and China right now.
There’s a lot going on in this one; I love when we get to bring hidden systems to light. I think you’re going to like it.
Okay: UL CEO Jennifer Scanlon. Here we go.
This interview has been lightly edited for length and clarity.
Jennifer Scanlon, you’re the president and CEO of UL Solutions. Welcome to Decoder.
Thank you, Nilay. It’s such a pleasure to be here.
I’m excited to talk to you. Some of my favorite episodes are when we demystify a thing that everyone takes for granted and the UL logo is one of those things.
Absolutely. The UL mark is on billions of products, and yet everywhere I go, people look at me and say, “What exactly does UL do?”
Well, my understanding is that you just drop things off of cliffs and see if they explode. Is that your day-to-day?
We do have people who drop things off of cliffs and see if they explode, but really every single day, we have 15,000 employees around the world working for a safer world. They are testing, inspecting, and certifying products. They are also creating software to help our customers manage their risk and compliance environments.
You run a big testing facility. Describe some of the tests that are done and who gets to do them and what some of the wildest tests that you do are.
I always like to say we break things, we blow them up, we light them on fire. If you were to walk into our testing facility here in Northbrook, Illinois, in Europe, in China, in India, anywhere in the world, you’ll first see large electrical panels that are there charging and discharging products, batteries, and seeing what fails. Watching a lithium-ion battery the size of my thumb blow up is pretty terrifying. It’s amazing how wide that blast will go. So we do a lot of inherently unsafe things to test product safety.
My favorite test is one I wasn’t there for, but I got to see pictures of it. We stacked two million soda pop cans in our large-scale fire testing warehouse and then dropped a lighted piece of paper in the middle to see what would happen. And to this day, I don’t know if we were testing the aluminum, the labels, or the contents, but I do know the test failed. The cans were supposed to cave in and kind of collapse upon themselves, and instead they exploded, and it took a number of days to clean up the two million failed soda pop cans. That’s what we do. We protect our customers. They needed to know that what they thought was going to happen didn’t happen.
Oh no. What’s the most dangerous test that you’ve gotten to be there in person for?
Our hazardous location testing is in Northbrook and my office is right above it. Every once in a while you’ll feel a little shake. And you really think, how bad could it be if a luminaire in a combustible dust environment sparks? Well, if you think about that, you’re out on an oil rig, you’re out in some factory, and a lot of lives could be lost. So while the test itself is well controlled, it really makes you think about the lives that are at stake in what we’re doing every single day.
Do you ever bail out of boring meetings and just go blow stuff up for fun? I would absolutely do that.
I don’t think the engineers would let me. But they do enjoy it when I come visit because I do ask a lot of questions and I’m always fascinated by the new things that we do.
I think you should ask them. I think they would let you. I’ve got to be honest with you. I know a few engineers. I think they might be like, “Yeah, we’ll set something up for you.”
Fifteen thousand employees, that’s a lot. The company started a long time ago as Underwriters Laboratories. Fire insurance companies needed to make sure electrical devices weren’t going to burn down houses so they could write fire insurance. Is that still the basis of the company? How does that work?
I like to say that the basis of our company was to address the safety of the technology of the day. And at the time, 1894, World’s Fair, on the edge of University of Chicago, where both you and I have a bit of history, the Underwriters’ Electrical Bureau brought our founder to Chicago to help do some primary scientific research on the safety of electricity to write standards about how that electricity should be used, both manufactured and embedded into products and installed and safely used in buildings, and then perform public advocacy, educating people on the new technologies.
Fast forward, certainly with the electrification of everything, the energy transition, AI data centers, electricity and electrical safety continue to be a primary worry and a driving force. But there’s lots of other new technologies of the day that we continue to help keep our customers safe.
What are the other technologies that you’re mostly focused on?
Some of the most current ones are AI safety, the ways in which AI is being embedded in products, and the ways in which humans engage with the safety of AI and products. Our newest outline of investigation, which is a precursor to writing standards, was published in November, and it’s all around the safe use of AI embedded in products.
That feels like a very meaty subject of conversation here. There’s a lot of AI safety debate in our country and in the world, so I want to come back to that.
I just want to start with some foundational questions that I have. One of them is where the authority to tell the industry what to do comes from anymore. When you had a bunch of insurance companies saying, “We won’t pay your insurance claims if the thing that burned down your house wasn’t UL-certified,” that provided an awful lot of incentive for people to go get that testing done and pay for it. At the time, UL was a nonprofit. A lot of that’s changed since then, right? Where does the authority or the incentive to participate in the UL process come from anymore?
It’s a really important question and relevance is a really important strategic concept that we focus on a lot. Who does it matter to if your product has been certified to a UL standard or even another standard? We certify to over 4,000 standards. Only 1,500 of those are actual UL standards. There are other authorities with jurisdiction and standards development organizations globally.
The importance of this is that governments and certainly insurance companies, underwriters, even today — and in the US tort system that becomes very important — are looking to ensure that what they’re underwriting is safe, what various agencies of governments around the world deem safe. How do you continue to build that trust between consumers and businesses and ensure that people believe that the products that they’re using are as safe as the standards allow them to be?
That sounds like a pretty big mix. You still have insurance companies saying, “You need UL-listed devices in your house, or maybe we won’t pay claims,” or, in the United States in particular, if there’s litigation around the safety of the products, this certification is going to be important. You might have some governments insisting on various logos. I think we can all see the certification logos on the products we have.
Is it a mix? How do you as the CEO think about, “Okay, these are the constituents who want this logo. I’m going to go take their needs and tell the industry, particularly the tech industry, which doesn’t like to listen to anyone, that they have to participate.”
Oh, they don’t. I started my career there.
How does that conversation go?
It goes like this, and I’m going to give a really great example. Let’s talk about e-bikes, and in particular e-mobility devices, but e-bikes in New York City. About five years ago there were a couple dozen people who were killed in New York City, and why? Overcharging of the lithium-ion batteries.
Lithium-ion batteries have a different chemical composition. The thermal runaway happens faster. The chemicals are more difficult to put out. In a typical house fire, you have a couple minutes to get out. With a lithium-ion battery fire, you have fewer than 30 seconds to come out alive.
So you’ve got this problem. People are dying. You’ve got this other problem, which is people are excited to use e-bikes because they’re an affordable mode of transportation. They are a very useful item. So how do you balance this?
We at UL Solutions heard from a number of customers, worked with our not-for-profit partner, who is our largest shareholder, UL Standards & Engagement, to write three standards around the safe charging, the use of batteries, and the ways in which lithium-ion batteries were installed in e-bikes. Three standards. We went to New York City, worked with the mayor’s team and the fire services team there, to ensure that those standards were written into New York law.
Once a standard is written into local legislation, if you’re a bicycle manufacturer, you’re not going to manufacture a different bike or a different charger to sell into New York City than you would in Chicago or Toronto or LA. So it starts to proliferate.
The good news is that since those standards were adopted in New York City, deaths have dropped by 75%. There is a real need for the safety of humanity in these standards, and then that becomes picked up by other authorities having jurisdiction, other communities like those other cities I named, or even local private campuses. Universities have expressed interest in, “What are these standards? How do we think about ensuring that a dorm doesn’t catch on fire?”
That’s the authentic approach to how this happens. There has to be the safety science that shows what the answer could be and should be. And then there has to be a recognition that that need is real and that it helps promote that trust between those authorities having jurisdiction, those governmental bodies, and the citizens and the users of products within their jurisdiction.
It’s interesting because the choke point there is retail, right? The city is not going to let you sell a bike without the certification because it’s deemed the bikes without the certifications to be dangerous. Is that consistently the kind of incentive that makes people adopt the standards or the certifications, that you have to stop, that there’s enforcement somewhere?
Not always. We’re going to talk more about this AI standard, UL 3115, but that started with our customers coming to us. We see this a lot, our customers saying, “Hey, as a manufacturer, if there is a standard that we should adopt and that we know our competitors will adopt, that levels the playing field and creates a consistent marketplace.” I spent almost 20 years in manufacturing. Our customers frequently come to us and say, “We see this happening. Help us think about how this new innovation, this new technology should consider what the safety science is.” That becomes the precursor to writing a standard.
Frequently our customers don’t even wait for the standard to be written. They start using that outline of investigation to guide their product design and innovation so that they’re more confident coming back to that insurance question, that if something happens they will not have a failure in safety.
I want to come back to the notion of customers, because UL has been restructured since you’ve been there. You took UL Solutions public. I’m very curious about that set of incentives and what that means.
Every time I talk to somebody who runs a standards organization, and we talk to a lot of them here on Decoder — whether it’s Bluetooth or HDMI — there’s just some element of being a politician that’s involved in that. You would not think of Bluetooth as a deeply political organization, but they have a lot of unwieldy stakeholders who are pulling in different directions. You were describing it as, “we need to create a market.” With HDMI, maybe you want a feature that no one else wants, and that’s a political problem for that standard. It doesn’t seem like you have that same set of pressures. How much politicking do you do?
We really don’t do politicking. The standards development process is a consensus process. As I said, our customers frequently come to us with the need for a standard. AI data centers are a great example. Moving to 800-volt DC is a very significant energy need and safety challenge. How do we start building standards around that? We kick that over to UL Standards & Engagement, which is actually the standards development organization, where they convene technical panels and follow a consensus-based process. There are some pretty rigorous approaches to that standard development, and the consensus is grounded in science.
Now getting that standard adopted by governments does take… And again, our standards development organization does this, the not-for-profit. They are involved in ensuring that the right attention is given to the opportunity to adopt those standards and spend their time promoting why it’s an important need, why it’s a good idea.
Let me ask about this structure then, because you are describing the inner relationship between the three parts of UL. It started off as obviously one big organization. It’s now been reorganized into three subsidiaries. Why the change and what are the divisions here?
We were not-for-profit from our founding in 1894 until 2012. We were founded to do the safety science research, the standards development, and the public advocacy. Immediately following the World’s Fair, companies started coming to Underwriters’ Laboratories asking for their products to be tested, inspected, certified. We did that as a not-for-profit, but charged for that, until 2012.
In 2012, our trustees realized that our competitors, many European, who were founded with similar histories as not-for-profits, had the opportunity to both do a better job funding the not-for-profit side and unleash that for-profit energy in an increasingly competitive environment. So in 2012, we split the two.
I joined in 2019 as CEO of the for-profit with the relationships back to the not-for-profit around the standards development and the research. Today they are structured as three separate entities. The standards development organization is the shareholder of UL Solutions. When we went public in 2024, two years ago last weekend, it was a secondary offering and they received the full set of proceeds to fund their endowment for their standards development and research institutes. So we’ve got a separate board of trustees, and four trustees sit on our board of directors. So there is a good strategic relationship, and I think that that’s very important, but we are run completely separately.
So there’s the three organizations: UL Standards & Engagement, the UL Research Institutes, and UL Solutions, which you’re the CEO of. Solutions is a public company, but you’ve got the trustees of the nonprofit on your board. How much do they get to tell you what to do?
I was a public company CEO prior to joining UL Solutions, and I don’t see any difference between this board and my previous board, because there’s a distinction. My previous board, I had Berkshire Hathaway as my largest shareholder, and they did not sit on our board. I was well-trained that as CEO, as the management team, and as the board, we serve all shareholders, not a single shareholder.
I treat that in the same vein here. All of our shareholders deserve equal attention and duty of care, duty of loyalty to all of them. There is strategic value in having the right strategic relationship with the not-for-profit, and that value goes to all shareholders. That’s the way we think about it. That’s the way we treat our board meetings. That’s the way we treat our management decisions.
I’m very curious about the commercial incentives you have running the for-profit part of the organization. I understand you had a lot of competitors that became for-profit testing labs, and I know the Decoder audience enough to say, “Well, that obviously corrupted them. They’re just selling marks now and selling more marks makes them more money and maybe the testing standards have gone down.” And I’m curious how you balance that.
I hear that from our audience a lot, that the financialization of everything has corrupted everything and the trust is gone because everyone’s just chasing dollars. You run a public company, you’ve got shareholders, you’re over here talking about them. How do you manage that? You probably could lessen the standards and sell more certifications, and that would probably be better for your shareholders in the short term, but obviously there’s the long term of the brand and what it means to people and the nonprofit to protect. How are you balancing all of it?
We’ve been around for 132 years and we still speak the words of our founder, which are, “Know by test. State the facts.” If we were ever to deviate from the highest quality standards, if we were ever to deviate from the highest quality science, it would erode the trust that our customers have in us that we’ve built for 132 years, and our business is trust. I fervently believe that we have to continue this long-term view of growth and relevance: grow our influence and our ability to advise and support our customers, and remain relevant. The only way you remain relevant is if you maintain that trust.
When you say customers, you don’t mean consumers, right? You don’t mean the end user. You mean big companies, governments. How do those customers express their preferences to you in the market?
We have three segments to our business: industrial, where customers tend to be selling their product in the B2B space; consumer, where our customers tend to be selling their products into B2C space; and then our risk and compliance software segment, where those tend to be our largest multinational, global, and strategic accounts.
Our teams are out there working with the new product development teams, the quality and compliance teams, in all of our customers. And our customers express their needs. As they’re going through their innovation cycles, we frequently have a line of sight into their product road maps and how they intend to use technology differently in innovation.
I say frequently, “innovation without safety is failure,” and I think our customers feel that same weight. They don’t want to fail. They don’t want to have a product launch that’s going to harm somebody either in that industrial environment or that consumer environment.
It’s a really open, honest dialogue because we’re there to help them. Sometimes helping them is giving them news that they don’t want to hear. But it’s incumbent upon us to tell them, “These are the facts, this is what happened in the test, and now you have to go back and do something about it.” We can’t advise them on how to redesign their product. That would be a breach of that trust. We have to stay agnostic and test when that product sample comes in.
I’m very curious about that. You said customers don’t want to make products that hurt people. The tech industry says that to us a lot. And in particular in AI, they say this to us. They talk about alignment and safety all the time, and then we can all see the reporting about what chatbots are doing to consumers. Where is that balance? Is it all just industrial applications? We don’t want the AI to run the elevators wrong? Or are you looking all the way to the model capabilities?
We focus on products. We focus on product safety. Functional safety of products comes into play when you embed software, let’s say, in an electric vehicle: you don’t want to turn the radio on and have the brakes slam because the latest software download changed the if-then-else statements and you find yourself in a safety problem. Similarly, with AI, you want to make sure that AI is not creating functional safety challenges. And we’re hearing from our customers that they also want to ensure that they can profess trust in the models.
Our UL 3115 came from customers coming to us and saying, as a great example, a child’s toy. How do you know that the data that was used to train that AI that’s embedded in a child’s toy was fair, that it remains private, that it’s transparent, that there’s lack of bias in the algorithm? Because all of that determines how that product actually performs, and so that’s the perspective that we have.
But back to your first comment about the technology industry being very resistant to others setting standards or guidelines or regulations, we fervently believe that third-party independent testing inspection certification leads to better outcomes for society.
I mean, I can point you right now to AI-powered children’s toys that are completely off the rails.
And I will just bring that back to, what is the enforcement mechanism? What is the choke point? There’s no New York City that’s going to say, “You can’t sell teddy bears in our town unless the AI has a certification.” I don’t think that exists in some of these markets. Where are you finding the enforcement or the incentive structure that makes them participate?
It’s early days, and I completely agree with you that we’ve got to get our arms around this. There are a number of standards development organizations around the world, not just UL, but IEC, ISO, others that are coming together and saying that this is necessary, this is important. We will continue to advocate that various governments and authorities having jurisdiction, tech industry associations, and others continue to pursue this.
But it is, indeed, an uphill battle where the tech industry likes to have its own approach and will cloak itself in intellectual property and proprietary standards. And I get that. I started my career at IBM, and I understand the value of tech and IP. But I’m a lifelong safety freak and I really believe that some of this stuff could make products inherently unsafe, and we need to do our best to prevent that from happening.
Let me ask you the other Decoder question I ask everybody, and then I want to dive into it using that framework. How do you make decisions? What’s your framework for making decisions?
My personal framework is grounded in data. I am a data person, and I think you need to have enough data and pressure test it to make a good decision. I believe organizationally in empowering people; if your job is to run X, then you should be grounding your decisions in data and making them, and then there’s a certain level of decisions that should potentially get bubbled up to me. But a lot of times, I think the people closest to the customer, closest to the decisions, need to make them.
There is one set of decisions here at UL that I will never overrule, and that is the scientific decisions that our scientists, our engineers, our lab technicians make. Every once in a while, a customer is not happy with a report or a decision that we have made and it can get raised to me, and I think my team has the confidence to know that I will never overrule a scientific or engineering decision.
That seems very important. That’s the heart of the enterprise, to protect the sanctity of the testing.
In the context of AI, but even in the context of batteries, which I want to talk about at length actually, it feels like the market is getting farther and farther away from wanting to comply. I’ll give you an example here. The Biden administration really pushed for AI safety, and it had a set of standards that it wanted to promulgate. President Obama was on the show talking about the need for AI safety. And his comparison was, very explicitly, “We failed to regulate social media and hurt people, we’re not going to screw that up with AI. We want the labs to publish their testing at the very least.” The Trump administration showed up, and all that is basically gone. That Biden-era EO is no longer in effect; it’s a free-for-all. What is bringing the frontier labs to the table with you? What is bringing OpenAI or Anthropic or xAI to the table?
I am optimistic that there are global forces around this. Because, again, multinational companies don’t just need to follow regulation in the United States. They need to follow what’s happening in the EU, what’s happening across Asia. And when you look at the influence of different countries and different authorities having jurisdiction in some of these topics, I do think it will expand. But I agree with you, there’s not any kind of top-down forcing function right now to bring them to the table.
Are you engaged with OpenAI or Anthropic or xAI or Meta?
We’re not directly engaged with OpenAI or Anthropic. We certainly have done a fair amount of work with Meta through the years and most of the hyperscalers and more on the product side. But these continue to be topics of conversation that our chief scientist and our PhD researchers in AI are out there promoting and continuing to try to push the rock up the hill.
You mentioned your new standard, UL 3115. It’s a pretty wide-ranging standard, right? It’s everything from data centers to consumer applications. I think the first two certifications under it have been issued, and they’re for building control applications, from what I understand.
Yeah. That was the Hanwha Qcells announcement.
That to me is, “Okay, we’re going to certify a building control application to make sure it doesn’t go haywire and turn up the heat in all the units,” or whatever a building control application might be able to do. All the elevators are going to go crazy.
This is just a philosophical question. These AI systems are fundamentally nondeterministic. They’re not predictable in the way that they operate, and that actually is what makes them powerful. There’s the bad side of hallucinations and them posting to their own weird internal Facebook that they’ve built for themselves. And then there’s the good side of, oh, that means they’re creative. They can do software development in a way that a deterministic system really could not do before. How do you test that? What is the mechanism of testing whether an AI-powered building control software is always going to do what it says if the engine powering it is inherently unpredictable?
AI models really rest upon that predictive modeling, but our focus is not on getting into the black box of the code. Our focus is on establishing over 200 criteria around how, internally, when they’re making decisions about their code development, they should think about bias, how they should think about transparency, how they should think about fairness and privacy.
When you say “think,” is it the models thinking or is it the people making the models thinking?
The people designing those models. How are they building out, what is the veracity of the data source that they use to train the models? That’s outlined in our standard of how they should make decisions. I love that you focus on how decisions are made. When I look at UL 3115, I think that it is a standard to help guide those decisions as AI is being embedded in products.
The big opportunity in AI right now is software development. The cost of producing new software is dropping precipitously and may drop to zero because the tools are so good at it, and tools like Claude Cowork and OpenClaw can just go do things for you all the time, which is really fascinating.
That means the number of providers of AI-empowered software is just going to skyrocket. When you describe the market-making capability or the market-making function of UL, that “everyone is going to get this certification so we’re all on the same level playing field,” if the playing field is vast and it’s a bunch of teenagers writing applications in their basements who don’t care about you, it might just totally get away from you. How do you think about that balance of big players who want to participate and get the logo mark versus an entrepreneur saying, “I can make you this building control software much cheaper,” who never actually comes to you?
I think that’s where our customers and what they’re looking for come in, and how they’re going to level the playing field of their competition. At some point, the end consumer does speak. I was in manufacturing for 20 years. I don’t want unsafe AI-powered kilns or metal presses in my environment. There’s a point at which you’re going to want that verification, that validation, that endorsement, that what you’re installing in your industrial environment or what you’re bringing into your home as a consumer is safe.
That’s where I do think the end user has a voice, because they’re going to decide, “Do I want to buy this product or not?” We have plenty of tests we do that have absolutely nothing to do with an actual regulation, but have to do with the fact that our customer has decided that this is important for their brand, for their end consumers, and that drives the demand for what we have to offer.
The other dynamic that’s happening in AI specifically is that the models themselves are getting ever more capable and the idea that you need to build a specific AI application that’s a wrapper around the model that forces it to do what you want, who knows how that’s going to play out. But you can see, “Well, maybe actually I just need a subscription to Claude and I don’t need a subscription to some application that is powered by Claude because Claude can just do it for me.” If those companies aren’t engaged with you, how does this work? If Anthropic and OpenAI and the rest aren’t engaged with you, how does this work?
It’s a really great question, because one of the concerns or questions that I actually have about AI comes back to that veracity of the training data. Back in my coding days, it was “garbage in, garbage out” — the more garbage that gets in to train these models, the more difficulty you have trusting that those models will actually keep their efficacy into the future and won’t just spiral in on themselves and become useless. I believe that should really matter to these development companies: does that model have the longevity to continue to provide answers, intelligence, and information grounded in something that is actually true and correct?
I’m just going to ask you straight up. Do you think they care?
I hope they care, because it should be self-preservation for them to care.
I mean, they seem to be doing pretty well without caring. That’s why I’m asking.
Well, there’s short term and long term, but we’ll see how this plays out.
You mentioned the pressure to rein in, be more safe, have more control for AI might come from other governments, other organizations. Maybe it’s the states. Where do you see the most pressure on making AI safe come from right now?
It’s interesting, and I know you’ve spoken with some of our large customers recently. I think it’s coming from those large multinational global customers who care deeply about how their products are used in environments and want their relevance and longevity to be out there. They don’t want to find themselves in a situation of failure.
When you talk about those customers, are they coming to you and saying, for something like UL 3115, “This is what we need it to say so that when it tests it meets our needs”? Is that how that standard is developed?
No, they’ve come to us and said, “We need a standard, help us think about it.” And so as we start to develop it, we bring them into a room, and then we’ve got our PhD AI researchers in there with them. It’s a dialogue grounded in science, and then a consensus forms: “Okay, we think that this actually will really help us. Let’s make sure that that’s in there.”
PhD AI researchers are very expensive lately.
Can you pay at the top of the market for those folks?
We’ve built a small and mighty team in this and we feel very good about their thought leadership and what they’ve contributed.
I’m curious because that’s the other arms race. I look at this from the outside and I say, “No one can keep up with these labs. They’re paying all the money. Even the competition between them doesn’t seem to be keeping them in check.” The idea that they’re all going to sign up for a literal checkmark from UL that says they’re safe when they’re all racing to an IPO… I’m just very curious where that pressure is going to come from. I don’t know if it’s going to come from an industrial manufacturing supplier at this point in time. I think it might have to come from a government.
We’ve got to keep pushing this rock up the hill. It’s still early days and it’s important to figure it out.
The other piece of this, as you mentioned, the standard covers data centers. There’s a lot of tension, political and otherwise, around data center build-out in this country and everywhere else. There’s just the electrical component of it, right? If you’re going to do a lot of electricity in a room, you probably want a bunch of UL-certified components in there. Is there more than that in UL 3115 as it relates to AI data centers?
UL 3115 is just really around AI embedded in products. With AI data centers, there are 70 other standards that we test to today around the safety of the electricity, the components, the chillers, the DC current coming in, the inverters, all of that. Then there’s a whole host of needs we’re hearing about from our customers: the rapid change in the amount of power, the rapid change just in the thermal dynamics of GPUs versus CPUs, the rapid change in the way that you’re going to put a megawatt of power into a rack or shift to water cooling. There’s a whole set of new standards way outside of UL 3115. We’ve had two AI data center summits with customers on how they’re thinking about their needs for standards in data centers and how we can rapidly help them continue to develop on their innovation pace in ways that they can feel comfortable will be safe in the future.
Do you think they’re going to slow down their build-out goals in order for these certifications to take hold?
No, I think they’re expecting everybody else to pick up the pace.
Let me ask you about the other race condition, because again, I think it would be great if everything was certified and everything was safe. And then I look at the markets that we’re in and there’s just an explosion of things all the time. The one that really strikes me is everything with a battery in it. We’re profiling more and more of these companies here at noti.group all the time. If you’ve got a lithium-ion battery and a high-efficiency motor and a dream, you can start a company that makes 500 products today. We’ve profiled some of them. Hoto and Fanttik are two that have just sprung up, and they make tools. And the other day I saw one of those companies had like a lithium-ion handheld Sawzall, which is just a lot of power. If you’re going to put that much torque in a little motor, that’s a lot of power you’re going to draw.
I look at these companies and they’re obviously all based in China, and whether or not they have UL certification is irrelevant to the consumers buying all these products, because they’re legitimately cool products and there’s a race of innovation happening there and it’s all just on Amazon, and Amazon doesn’t seem to be enforcing any of these standards at all. How do you think about that? How do you think about the prevalence of high-powered lithium-ion batteries everywhere without the consumer demand for your certification?
First of all, Amazon is a great customer of ours and you can drop down and see if something’s been UL certified.
They should make that more prominent. I think you should probably tell them to make that more prominent.
They are a great customer of ours. And indeed, innovation is fast. Batteries are exciting and dangerous, and we continue to work with customs agents, various authorities having jurisdiction, and our customers to help educate how to keep those lithium-ion batteries safe, particularly if you’re importing into the US markets. A great example of this was about a decade ago when hoverboards were exploding and—
Yes. So the Consumer Product Safety Commission came to UL Standards & Engagement — UL Solutions, at the time — and said, “Can you very rapidly write a standard and help us get our arms around this?” And we did that. And again, it helped with the safety.
One of the key areas that we have is market surveillance and anti-counterfeiting. So we are constantly working with customs agents, and also with competitors who are putting the UL mark properly on their product, who will highlight a product in the market that’s not meeting the codes and the standards. We’ve won some significant lawsuits in cases involving unsafe batteries and uncertified products that aren’t in compliance with the law.
Amazon and UL together, you’re suing some e-bike manufacturers that are selling on that platform with fake UL certifications.
You have to catch them. So you have an enforcement team that’s actually scanning Amazon for fake UL certifications?
We have a team that responds to anyone who wants to highlight what they think is an unsafe situation or a counterfeit UL mark.
Can you scale up fast enough to meet the flood of new products? Again, with a lithium-ion battery, a high-efficiency motor and a dream, you can start a company and make 500 products tomorrow. Can you scale up to meet that flood in terms of testing?
We can absolutely scale. We’ve scaled all over the world, and we like to say we meet our customers where they are. If you’re doing innovation in China, we’ve got our testing labs in China ready to go. If you’re doing innovation here in the United States, we’ve got our labs here ready to go. If you’re manufacturing anywhere in the world, our field inspection team will visit your plant four times a year to ensure that you’re manufacturing in accordance with the standards that we tested to. We’ve been growing, and we’ll continue to grow.
Do you make the case to Chinese manufacturers, “Hey, if you have this UL certification, you will make more sales”?
And there’s data showing that the US consumers actually care about this?
Yes. And manufacturers in China, all across Asia, they know that if they want to get their product into the US market, they need to follow the safety standards and we’re there to certify for them.
I know you just made an acquisition to expand your testing presence in the EU. How big is your presence in China compared to the United States and the EU?
We report revenue by point of customer. If you’re a US customer but we’re testing your product in China because you happen to have an innovation center there, we will report that in the US. Last year, I believe 42% of our revenue was point of customer in the United States, about 25% was in China, about 17% was in EMEA, and then the rest of the world. So China has been very important for us. We’ve been in there for 40 years. We’ve got a joint venture partnership, and we have independently wholly owned labs as well. We work very closely with a significant number of Chinese manufacturers to help them get their products to markets all over the world.
The relationship in China has been the point of contention recently with the Trump administration.
During the Biden administration, the FCC launched something called the Cyber Trust Mark, which was supposed to certify IoT devices specifically as being safe. UL was supposed to be the lead administrator, sort of writing the standard for that.
Brendan Carr, who is well known to listeners of the show and my other show, noti.groupcast, is the current chair of the FCC. He has a lot of ideas, Mr. Carr, and he decided that your relationship with and your work in China somehow was corrupting. Something happened, and I’m dying to know what exactly, but UL is no longer participating there, and a Trump donor’s company is now the lead administrator. What was that conversation with the Brendan Carr FCC like around the Cyber Trust Mark?
We’re a proud American company. We’ve been here for 132 years. If our government asks us to serve, we of course will step up and serve and support whatever they need. And so we were really pleased with the work that we were able to do as a lead administrator to help set up the parameters of that and work closely with the FCC.
When the FCC decided that they wanted additional requirements from the lead administrator, we realized that we weren’t the best fit for that. And we easily transferred that intellectual property and that work back to the FCC and they continued down their path.
What were the additional requirements?
Those requirements were really around how they wanted to run the program in the future. And it was a set of requirements that we didn’t feel we were best positioned to meet.
That sounds very bureaucratic and administrative. I’m looking at Brendan Carr, he basically accused you of being beholden to the Chinese government. Did you ever respond to that directly? How would you respond to that now?
We have been very transparent about our operations, our relationships all over the world, and we continue to be so.
Brendan is not a subtle man. He doesn’t do things in the shadows. He says you’re beholden to the Chinese government and you’re saying that is absolutely not true. And it was enough for you to walk away and say, “We don’t want to be a part of this.”
I think that where we all landed is the right answer for all of us.
Similarly, the FCC right now is banning a bunch of Wi-Fi routers simply because they’re made overseas. Obviously, you looked into this with the Cyber Trust program, you have these other certifications. Do you think it’s correct to say any device made in China is an inherent security risk?
We have long and deep relationships with customers all over the world and long and deep relationships with customers in China. Those customers see value in testing to standards and following regulations and rules, and we will continue to support them in the ways that they need.
Do you think that there’s a potential certification for devices made overseas that US consumers or US companies can say, “Okay, the supply chain risk that we’ve heard about has actually been mitigated or the appropriate controls are in place”?
I think the set of standards that exists today really facilitates that trust that consumers should have with products that are made anywhere in the world. If you’re adhering to those standards, if you’ve got a third-party tester that has endorsed and certified that you’ve met that, I think that’s the mechanism that does that.
I just see the proliferation of products and I’m wondering if maybe all the way at the end, you say you have some data that says consumers prefer UL products and I hope that’s true. But then we at noti.group cover, I don’t know, cameras for your house that have just gaping security holes in them, where there’s just like live feeds streaming to the whole internet at large because there isn’t a security apparatus or an updates apparatus. We do see that with routers. We’ve seen a lot of hacks with consumer-grade routers. I’m just wondering where that extends to, particularly in software.
You buy a power strip, you can see the logo on the back of it, or maybe Amazon will at least show you the logo and maybe you’ll still buy the cheaper one because you don’t really know what it’s for. With these software products or these hardware products that are running a lot of software, it’s not right in front of you. So how do you make that case?
We do have a service that focuses back to that functional safety of that embedded software or that efficacy of that product being connected to the internet and its cybersecurity. There are standards around that and there are ways to approach it, but I think what you’re highlighting is an opportunity to make consumers more aware of what they should be looking for and demanding as they purchase their products.
Do you think that this is just a market problem? I think maybe this is what I’m coming back to over and over again throughout this conversation. I really wish the consumer market demanded more of these companies. But that’s just a collective action problem.
I think it’s totally rational for most people to just pick the cheapest power strip that Amazon has on the first page, and I can’t really blame them for it. At the same time, maybe we don’t have a federal regulator who’s going to step in and say, “Okay, to keep everybody safe, we’re going to demand the certification.” Maybe we don’t have insurance companies who are going to go demand them of Amazon.
And then when it comes to software, it seems like the tech industry in particular is utterly resistant to anyone telling them what they can do. And the idea of a UL certification for firmware updates on your cameras on a cadence is just maybe the hardest sell of all. So if it’s not going to be the consumers that do it and we have a government that seems checked out of it, this is what I keep circling and what I was most excited to talk to you about. Where does the pressure come from for people to participate in a safety program?
This is, to me, one of the exciting pieces of when we went public and funded the endowment for our not-for-profit. We’ve talked a lot about the standards development organization, UL Standards & Engagement. We haven’t talked so much about the UL Research Institutes and the areas where they’re focused.
One of their institutes is focused really on AI safety and how should the world be better educated on what would be considered safe and where they should dig deeper. There’s a lot more to come, not just on the research around that, but also around the step to raise the consumer’s consciousness of the fact that, if something’s free, you’re the product. Back to social media, if you’re using it and it’s free, you’re the product. How do you protect consumers from that? It’s a really important concept and I still think it’s early days on this in AI.
You’re in the business of selling safety. I think that’s a fair way to describe what UL does. Do you think that the way that Dario Amodei or Sam Altman talk about AI alignment and safety is effective? Because their pitch is, “If you don’t let us do whatever we want, we might kill everyone in the world.”
I think they’re trying to ground in science and engineering, and certainly in different ways to use AI and different models. LLMs are one approach, but there are lots of others. It’s probably a false choice to say, “Let us do what we want and we’ll prevent this from destroying everything.” I think you need both.
When you say both, you mean outside testing and validation or government regulation? What do you mean by that?
All of the above. It would be ideal if the tech companies came together and said, “Here’s what we believe collectively will help keep the world safe and then we’ll adhere to that,” versus letting each one just go off and follow whatever path they think is best.
Again, you manage a complicated safety structure, so I’m just asking you abstractly. If you had to pick a structure for that to happen in, does that look like a government regulation? Does it look like an industry body? Does it look like a nonprofit that controls a for-profit testing center? How would you design this?
I think where it’s heading is toward more of the standards development organizations and the industry bodies coming together, because they will be the most knowledgeable about what should work. You always want that deep industry expertise when you’re developing any type of safety standard that then moves into regulation. If you start with regulation top-down, you don’t always get to the right answer and it’s not always grounded in the science and the engineering that it needs to be. I would advocate industry groups with standards development organizations.
Jennifer, what’s next for UL? What should people be looking out for?
For us, it’s going to be this continuation, as I like to say, of growth and relevance. We will continue to be on the forefront of innovation and continue to find ways to make safety relevant for whatever innovation comes next. I can geek out and get excited about quantum for a second as something that’s the future extension of what’s post-AI or what makes AI better. But those are areas that we continue to try to stay involved in and think about — not just the electrical safety of 132 years ago or the electrical safety needed in data centers today, but what is coming next.
I do like that even though it’s been a long time since you’ve been at IBM, you brought it back to quantum. It’s very IBM of you. I really appreciate that.
It is. I have to say, I’ll give a little IBM shout-out because I love Arvind [Krishna, IBM CEO]. I was walking through O’Hare and they have their IBM quantum chandelier sitting right there next to the dinosaur. And I mean, I skidded to a halt while I was pulling my luggage. I’m like, “Oh my gosh, it’s a quantum chandelier.” It’s really exciting because I am here in Chicago, and we at UL have been involved in creating the quantum ecosystem that Chicago has been promoting, and we’re excited about what’s next in that set of technology. We can talk about that another time.
The second someone ships a working quantum computer that does economically relevant tasks, we’ll have you back to talk about it.
I don’t know when that’s going to be.
Closer than we think, I hope.
That’s a bold prediction. Thank you so much for being on Decoder. This was great.
Thanks. Nice to meet you.
Questions or comments? Hit us up at [email protected]. We really do read every email!
Decoder with Nilay Patel
A podcast from noti.group about big ideas and other problems.
SUBSCRIBE NOW!