BRIAN SANTO: I’m Brian Santo, EE Times Editor in Chief, and you're listening to EE Times on Air. This is your Briefing for the week ending August 2nd.
We want the Internet of things to be smart, but being smart requires processing power – which will be lacking in millions of IoT devices. It’s what we call in the business “a conundrum.” But – there may be an answer! You’ll hear what that is.
As we reported last week, the biggest companies in the world are beginning to compete with their own chip suppliers. The latest example is Alibaba, which just released a high-performance processor of its own design. Alibaba’s move is significant for technological, financial, and political reasons. We’ll look into that.
Also, you’d think that the people building autonomous vehicles are using sound design principles.
JUNKO YOSHIDA: Those with an IT background who have grown up in the culture of “move fast and break things” don’t necessarily do that. They tend to go for alternate approaches.
BRIAN SANTO: “Alternate approaches.” You are going to want to hear the rest of this. We’ll get to that in a minute.
First – the Internet of things is going to lead to a world that is smarter. We’ll be installing sensors farther and farther away from data centers – along highways to make driving safer, into farm fields to monitor how our food grows, into remote areas to track weather patterns, and much, much more. Adding intelligence has always meant adding more processing, which also means drawing more power – but the vast majority of the devices we install in these remote areas – at the farthest edges of the network – will, by necessity, lack sophisticated processing capabilities and will be very low-powered. How to reconcile that?
Sally Ward-Foxton is one of our correspondents in London. She keeps on top of trends in artificial intelligence for EE Times. In a recent story, Sally wrote about a group of researchers looking into ways to distribute AI at the network edge. They call their approach to machine learning “TinyML.” International correspondent Junko Yoshida caught up with Sally.
JUNKO YOSHIDA: Let's go back to the basics here. I want you to explain what's TinyML, and what is it for?
SALLY WARD-FOXTON: So TinyML stands for Tiny Machine Learning. Not just for edge devices, but for devices at the very edge. So machine learning's already in edge devices. If you use the Facebook app on your phone, you're already running machine learning inference on your phone. So what we're talking about here is machine learning at the very edge. So something like ultra-low power sensor nodes, gadgets that use energy harvesting, or situations where there's barely any power available at all.
As far as defining TinyML, at the recent meeting of the TinyML Group, one of the speakers, Evgeni Gousev from Qualcomm, defined TinyML as "machine learning approaches that consume less than a milliwatt." "In Qualcomm's experience," he said, "the milliwatt really is the magic number for applications in a smartphone that are always on." So under a milliwatt is what we're aiming for. And there will be a whole ecosystem springing up around this application, but it's really still emerging right now.
JUNKO YOSHIDA: Right. So we are here talking about how best to enable ultra-low power machine learning, not just on smartphones, but all the way down to the sensor node. So I just want you to break it down. Is this a matter of a streamlined framework for training to make this TinyML possible? Is this a matter of framework, or some sort of a new technique we're talking about here? Or simply new low-power hardware that we need?
SALLY WARD-FOXTON: So there are techniques that we use today in machine learning for reducing power. We can do things like quantization, where we reduce the precision of the numbers that we use in the model to make the model more efficient.
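The idea Sally describes can be sketched in a few lines. The snippet below is a toy illustration of int8 affine quantization, not any framework's actual API; the function names and the symmetric int8 range are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights):
    """Toy post-training quantization: map float32 weights onto the
    int8 range with an affine scale and zero-point. Real frameworks
    do this per-tensor or per-channel with calibration data."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 encoding."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
# reconstruction error stays within one quantization step of the original
```

Storing int8 instead of float32 cuts model memory by 4x, and on a microcontroller it also allows integer-only arithmetic, which is where most of the power saving comes from.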
In the TinyML meeting, one of the Google engineers, Nat Jeffries, spoke about cascading models. So instead of running one large model, he broke it into three smaller models. So say for speech recognition, the first model might just be deciding whether there's any sound happening. And if there is, it activates a second model, which decides whether that sound is human speech. And then that triggers the rest of the model, which is more power-hungry, and so on.
So only a small, low-energy part of the model is running unless the rest is needed. And that can save lots of power.
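The cascade can be sketched as a chain of gates, where each later stage stands in for a progressively more expensive model. Everything here is a hypothetical stand-in for illustration; the thresholds and function names are invented, not Google's implementation.

```python
def energy_gate(samples):
    # Stage 1 (cheapest): is there any sound at all above the noise floor?
    return max(abs(s) for s in samples) > 0.1

def speech_gate(samples):
    # Stage 2: stand-in for a small model that asks "is this speech?"
    return sum(abs(s) for s in samples) / len(samples) > 0.3

def full_recognizer(samples):
    # Stage 3: stand-in for the large, power-hungry recognition model.
    return "speech detected"

def cascade(samples):
    """Run the cheap stages first; each hungrier stage wakes only
    when the previous one fires, so the big model rarely runs."""
    if not energy_gate(samples):
        return "silence"
    if not speech_gate(samples):
        return "noise"
    return full_recognizer(samples)
```

Because most sensor input is silence or background noise, the expensive third stage stays asleep almost all of the time, which is the point of the cascade.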
JUNKO YOSHIDA: So rather than doing everything in one shot, you are splitting the AI process into several different parts. Is that it?
SALLY WARD-FOXTON: Exactly, yeah. Kind of like when we used to talk about ultra-low power microcontrollers and only waking up certain parts of the device as they're needed to save power.
JUNKO YOSHIDA: Yup. What about software and hardware? What sort of inventions or new developments or improvements are needed to make this ultra-low power machine learning possible?
SALLY WARD-FOXTON: So yeah. In terms of hardware and software solutions, these are definitely still emerging. Google's working on building a version of TensorFlow for microcontrollers. There's already a version called TensorFlow Lite, which is primarily for mobile phones. They're adapting it for microcontrollers.
On the hardware side, there are several specialist companies working on ultra-low power accelerator chips. At the TinyML Group meeting, there was a presentation from GreenWaves Technologies, based in France. They've developed an eight-core accelerator that uses RISC-V. They reduced the clock speed and the core voltage to get it to run on barely any power.
JUNKO YOSHIDA: Wow! That's interesting. So in your story, you wrote that an industry discussion of how to proceed with ultra-low power machine learning was overdue. And I couldn't agree with you more on that one. But Sally, give me your take: Where do we stand, and who in the hardware and software space is leading this effort for ultra-low power machine learning?
SALLY WARD-FOXTON: I think there's certainly a feeling that new applications are being held back because the hardware isn't there yet and the software frameworks are not there yet. Google's really taking the lead on this one. They've clearly identified this as something important that they want to serve with TensorFlow Lite. And on the hardware side, I think microcontrollers definitely have the edge at the moment. They are just totally ubiquitous.
All these ultra-low power sensor nodes we're talking about, they probably have a microcontroller in them already. It's a mature technology, relatively cheap, everybody knows how to use them. And Google is backing microcontrollers as well. So microcontrollers just have a massive advantage, really.
That's not to say that'll always be the case. Specialized hardware might make some inroads, but overall I think the microcontroller will be very difficult to unseat from its prime position.
The GreenWaves speaker, Martin Croome, said that things are moving so fast that, for specialized chip companies, the danger is they end up being really good at accelerating what everyone was doing last year, which is obviously not good. So retaining a bit of flexibility for future machine learning algorithms, that might be the key there.
BRIAN SANTO: Last week EE Times launched a series of articles – what we call a Special Project. The series explored the various ways the biggest companies in the world are remaking the semiconductor industry. They include Amazon, Baidu, Google, Facebook, and Microsoft.
Correspondent Nitin Dahad’s contribution to our Hyperscaler Special Project was about how the big internet companies are beginning to compete with their own IC vendors.
The day we published the package, as if on cue, one of the hyperscalers – Apple – bought Intel’s modem business, an acquisition that will have far-reaching repercussions through the semiconductor industry. Apple used to be a big modem customer of Qualcomm’s; that’s now likely to change.
That same day, another hyperscaler, Alibaba, announced it had designed its own new processor. Here are Nitin and Junko Yoshida again to discuss the very many ways the new processor is significant.
JUNKO YOSHIDA: Nitin, you were part of the new Special Report that we at EE Times just launched last week, focused on the hyperscalers' impact on the semiconductor industry. Given that internet platform giants are getting into a host of vertical business segments, which by the way includes their own chip designs, how significant do you think Alibaba's move is? You know, Alibaba's move to design its own chips. Tell me your take.
NITIN DAHAD: Okay, yes, Junko. So just to recap, as we highlighted in the Special Report and in EE Times On Air last week, many of the large internet platform companies -- and these include Facebook, Amazon, Apple, Alibaba and Google -- they're increasingly getting impatient with existing roadmaps and timelines from the semiconductor industry. And going the do-it-yourself route. For all kinds of reasons.
Alibaba's move to design its own chip is part of this trend. And I think you'll probably talk about this a bit later. It's also strategic. It's also significant for China, since it addresses the country's ambition to be more self-sufficient in semiconductors as part of the Made in China 2025 initiative. So in effect, this is symbolic both for China and for RISC-V.
JUNKO YOSHIDA: Got it. Actually, as I briefly mentioned before, over this past weekend, I had an opportunity to quickly catch up with Xiaoning Qi. He's the Vice President of Alibaba Group. He was previously the CEO of C-Sky, which developed its own homegrown 32-bit microprocessor for the embedded market.
So Xiaoning's team has the chops to do various chips, but what they're doing now under the umbrella of Alibaba is of great interest to me. And when I talked to him, he said, you know, Alibaba's chip group doesn't plan to sell the newly designed RISC-V chip. Rather, it intends to offer what he called "chip templates" to other companies.
So my question to you is, what is the performance of this RISC-V chip, and what are the target markets for this?
NITIN DAHAD: What he says is absolutely right. What they're going to do is offer their own chip platform and release parts of the code as open source on GitHub to stimulate related development. So really, this is an enabler of RISC-V development in China. As you say, they're not trying to sell their own chips.
And as regards performance, Alibaba claims a major breakthrough with what they call the Xuantie 910 chip, which they released last Thursday. It's said to be 40% more powerful than any other RISC-V processor to date.