Elevating the Web Platform with the JavaScript Framework Community (Google I/O ’19)


[GOOGLE LOGO MUSIC PLAYING] NICOLE SULLIVAN: In the
summer of last year, Chrome began collaborating
with frameworks closely. It started off tentative,
because there’s history there, right? But over time it has grown
into a really important way that we develop new
APIs and that we ensure that the things
that we’re building are actually going to be
helpful for both frameworks and for developers. And we’re going to share a
bunch of the cool collaborations that we’ve come
up with this year and that have come out of that. But first, I’d like
to tell you a story. One day when we were
all working together, Sebastian, who is on the
core team at React, and Addy were talking about image perf. Addy works at Google. Now these are two
smart people who deeply care about performance. And Addy was talking
about bytes over the wire. And he was saying we’ve got
to reduce the number of bytes over the wire. And Sebastian, on
the other hand, was saying we need
layout stability. We need images to appear
all at the same time, and we need to preload them
so that the users are never getting that weird,
uncanny valley thing, where the page is visible, but the
images haven’t loaded yet. Now again, two smart
people who both really care about performance prioritizing
completely different things. What do we make of that? And then it occurred to me. This is the 10-year
anniversary of Big Pipe. It’s a revolutionary
technology that allowed Facebook to
deliver parts of the page independently from the others. So for example, if you
look at the screen here, the Compose view could load
completely independently from the Feed component. It led to massive
performance improvements, as you can see from this
graph, particularly on Chrome. And this is when it
struck me, Facebook has had Big Pipe
for 10 years now. Of course they see
performance differently. They haven’t dealt with a mess
of route level code splitting in ages. I think that we can bring
all of the power of Facebook and Google internal
tools via frameworks to everyone on the open web. Working together,
we can take it even farther. We’re on the brink of a
performance revolution, led by frameworks, inspired
by powerful, battle-tested internal tools and technology. And we all have a
role to play in it. No matter what your
role in the web is, we’re all going to
need to play a part in making this successful. So whether you’re a
framework, or what we tend to call a
meta-framework, which is like your Next or your
Nuxt, or even your Angular CLI, those sort of wrapper frameworks
have a huge role to play. Bundlers, package managers,
application authors, obviously, node module authors–
and we’ll get to what folks who author
node modules can do, as well. And of course, browsers– at Chrome, we’re just super
excited about how we can help. So today we’re going to talk
about a sort of year in review, just start off by sharing all
the collaborations that we have going on with frameworks. And then we’re going
to talk about adding a little more nuance to
our performance goals. Finally, we’ll talk about
secrets of Facebook and
Google internal tools. What makes this useful is that most of the frameworks
are going in the direction of these sorts of internal solutions anyway. So this will give us a peek into
the future of what frameworks will bring us. And then we’ll finish up with a
bit about bundle bloat and NPM modules. By the end of the
day, everyone should have an idea about how they
can participate in this future. So when I joined Chrome
in June of last year, I started reaching
out to frameworks, asking questions like, what’s
your wish list for the web platform, and then
connecting them with the engineers
working in those areas. I also started reaching out
about new APIs, asking, hey, is this going to work for you? Do you want this
one or that one? Which one looks better? It grew from there. We can’t possibly talk about
all of the collaborations we’ve done this year. Because so many frameworks
have helped us out. Shubhie and I have reached
out– more times than I can count– to
different frameworks to ask them little
questions about APIs or what would be a better
way to handle x or y. And they’ve been
incredibly helpful. But we picked four things–
it was hard to pick– to talk about today– code chunking, scheduling,
isInputPending, and Display Locking. So first up, let’s talk
about code chunking. One of the fundamental
goals of the browser is to handle user
interaction instantaneously. If someone clicks or taps,
we want them to never feel like there’s any distance
between what they’re doing and them taking action
on whatever they’re trying to interact with. JS runs in the same thread as
user input is handled– mostly; you know, caveats for
the compositor thread. So in order to
keep the UI snappy, JavaScript tasks need to be
broken up into small chunks. If the user clicks when
a long task is executing, represented by the
long yellow bar, they might have to
wait a very long time. This graph is really hard
to understand on a slide. So don’t worry
about that at all. What we really want you to
notice is the pink part. An engineer on our team,
Katy Dylan, did a big analysis
of tap latency, queuing and handling
time, and found that the top contributor to
it was actually V8.execute. That means the application’s
JavaScript execution. At any given time, you
have between 10 and 50 milliseconds to
execute JavaScript before you’ll block
user interactions, depending on what they’re doing. So how are we supposed
to fit into that deadline realistically? This is where
frameworks can help. Frameworks like
Vue and React have been starting to break up their
render work into tiny chunks. For example, React
experimented with yielding between nodes of the
render tree that includes– sorry. Yielding means pausing to other
cued up work, or browser work, to execute. So it includes taps, clicks,
and other scripts on the page. This is great because
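To make that concrete, here's a minimal sketch of what chunked, yielding render work can look like. hasMoreWork() and renderNextNode() are hypothetical stand-ins for framework internals, and requestIdleCallback is just one way to yield:

```js
// A minimal sketch of chunked render work that yields between units.
// hasMoreWork() and renderNextNode() are hypothetical framework
// internals standing in for "one small piece of the render tree."
function workLoop(deadline) {
  // Do small units of work while this frame still has idle budget.
  while (hasMoreWork() && deadline.timeRemaining() > 1) {
    renderNextNode();
  }
  if (hasMoreWork()) {
    // Yield: give queued-up input and other work a chance to run,
    // then pick the loop back up in a later idle period.
    requestIdleCallback(workLoop);
  }
}
requestIdleCallback(workLoop);
```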
This is great because it allows the browser to process user clicks
when they happen during the frameworks
render cycle. We now have these tiny
chunks of JavaScript executed and the application
author didn’t need to understand it at all. The framework just
manages it for you. This is fantastic. React built a scheduler to
allow these bits of code to be efficiently executed. Vue is also experimenting
in this space. And Ember has built a scheduler. So everybody’s working
toward some common goals. But unfortunately,
a lot of the code is outside of the
framework’s control. That means that we have
a coordination problem. A single framework doesn’t
control the entire app. And any other code
on the page can starve the framework scheduler. Another challenge for
framework schedulers is that they lack
adequate signals that would let them know when to
schedule things and when not to– for example, things
that the browser is doing, like
garbage collection. As a result, frameworks
reached out to Shubhie and me and wanted to talk
about the idea of making an in-browser scheduler. We both thought that
was pretty interesting. And so we decided to pursue it. We spoke to Maps. We spoke to Airbnb, to
Ember, Angular, React, Vue, and many others to get a sense
of their scheduling needs. We had a design session
with the React core team. We studied a bunch of different
scheduler implementations. And we think we’re starting to
get the shape of the problem. So how does browser
scheduling work today? Let’s take a walk through it. There are four basic
priority levels– immediate, render blocking,
default, and idle. The first two, immediate
and render blocking, lead to bad user experience. They both block clicks,
and rendering, and taps, and anything else like that. So we need to use these two task
priorities as little as we can. The last queue, idle, is often
too late for important work. It’s also vulnerable to being
starved by basically anything going on in the other queues. So this priority
can’t really help us. The default queue is sort of
the junk drawer of the web. It contains almost everything. It’s got script. It’s got async callbacks,
browser-side async work, internal work, garbage
collection, network fetches, and script loading. That’s a lot. So what can we do
with this mess? First, we need to
move to non-render blocking queues for anything
that isn’t absolutely urgent. Instead, we should
defer everything we possibly can to that
normal default task queue. But that means more tasks are
going to fall into the default bucket. And we already said
that the default bucket is the junk drawer of the
web, right, completely full. To make that work, we also
want to add three more priority levels under default–
high, medium, and low– so that we can begin to manage
that work more efficiently.
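As a rough illustration only– this API shape is a sketch based on the early postTask proposal, not a shipped interface, and the task names are hypothetical– posting prioritized work might look something like this:

```js
// Illustrative sketch of posting tasks at different priorities to an
// in-browser scheduler. The scheduler.postTask shape follows the early
// proposal; renderNextChunk, prefetchNextRoute, and reportAnalytics
// are hypothetical application functions.
scheduler.postTask(() => renderNextChunk(), { priority: 'high' });
scheduler.postTask(() => prefetchNextRoute(), { priority: 'medium' });
scheduler.postTask(() => reportAnalytics(), { priority: 'low' });
```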
Have you written a web scheduler for your product or project? We’d love to hear from you. Please reach out. We’re starting to prototype. We’d also love framework
authors to try early versions. We’re already talking
to a lot of you. But if not, we’d love
to work together. The next API we’d like to
talk about is isInputPending. It’s a shorter term solution
to some of the scheduling difficulties we’ve had. It’s something that we
were able to ship quickly. And it allows a
framework or a developer to check if a user
action is pending. Remember how we told
you that frameworks were experimenting with yielding
between nodes of the render tree? It’s a lot more
efficient for them if they can check if they need
to yield rather than actually yielding every time they can. We’ve been collaborating
on a short term solution to make that more performant. If the framework
calls isInputPending, they can tell if their
work will be user blocking without having to yield. So instead of yielding five
times in this example, maybe they yield only once.
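A minimal sketch of that check-before-yielding pattern– the workQueue and processWorkUnit names here are hypothetical:

```js
// A minimal sketch of yield-only-when-needed work. The workQueue and
// processWorkUnit names are hypothetical; we only pay the cost of
// yielding when the user actually has input waiting to be handled.
async function runWork(workQueue) {
  while (workQueue.length > 0) {
    processWorkUnit(workQueue.shift());
    if (navigator.scheduling && navigator.scheduling.isInputPending()) {
      // Yield back to the event loop so the click or tap is handled.
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
}
```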
Andrew and Nate from the Facebook team committed this code to Chromium. And we’re pretty
excited about that. The next API I’d
like to talk to you about is called display locking. It allows updates
to a locked subtree to not be rendered immediately. This is super
important when you want to do things like
have a scroller and have stuff off screen. Virtual dom
implementations can use this for finer grained control
when doing framework rendering. And it’s also useful
for any kind of widget, like a scroller or tabs or
a carousel or anything that has content that
isn’t being shown, because that content can
be updated without paying any rendering costs.
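As a rough sketch based on the early display locking explainer– the proposal was still evolving, so treat this shape as illustrative rather than a stable API– updating a locked, off-screen subtree might look like this:

```js
// Illustrative sketch based on the early display locking explainer;
// the API shape was still in flux. renderTabContent() is hypothetical.
async function updateHiddenTab() {
  const tabPanel = document.querySelector('#hidden-tab');
  // Lock the subtree: mutations under it skip rendering work.
  await tabPanel.displayLock.acquire({ timeout: Infinity });
  tabPanel.innerHTML = renderTabContent(); // no layout or paint cost yet
  // Commit later, when the tab is actually shown.
  await tabPanel.displayLock.commit();
}
```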
The feedback we’ve gotten so far from the React team helped shape the API. And we’d be very excited
for more folks to try it. What’s super clear to us is
that when we work together with frameworks and browsers,
the result for both developers and end users is
significantly better. We’re really excited
about continuing to collaborate with frameworks. These are links to details
about the particular APIs we talked about in this section. Please open GitHub
issues for comments and questions and ideas. Let us know if you have
a scheduler that we should check out. We’d love to hear from you. So we just finished
up talking about some of the collaborations
that we’ve done in the last year
with frameworks. Next up, we’d like to talk about
our goals for user experience and how we want to add some
nuance to those performance ideas. In particular, we want
to talk about page load time and single page apps. And we want to talk about
budgets for total resource sizes. So loading perf is an
incredibly important aspect of user experience. But on the other
hand, today’s metrics don’t tell a full story
of the trade offs. Let’s dig in. Absolutely everyone
wants users to be able to interact with
their application as soon as possible. But developers have had to make
a really difficult trade off. And by and large, they’ve
chosen slower initial load time in order to have really
snappy single page app transitions afterwards. Our metrics don’t
capture that trade off, because single page app
transition timing is really hard to measure. What we want is that
application authors no longer need to make that trade off. Now we want to be really clear. Loading perf is a very important
aspect of user experience. So how can we meet
budgets that were designed for average phones and
still have feature rich apps? We’ve said that you need to
have all your critical resources loaded for a route in 170kb and
that includes CSS, JavaScript, HTML, and data. But that isn’t super realistic
for a feature rich application, especially if they need to
compete with native apps that don’t have code loading
constraints, though they have other issues. And the answer really can’t
be let’s cut all the features. That would be sad. And the application wouldn’t
succeed at its business goals. So what do we do? In fact, 170kb is realistic
when we consider it only for the initial code and data. What if we loaded
everything else on the page only when we needed it? We could achieve
that first impression experience and those
snappy single page app transitions afterwards. We’d meet that initial budget
without limiting features. Obviously, route
level code splitting is a good step if you aren’t
already code splitting. But it’s still too much code. We need component
level code splitting. Keep in mind– this
is going to mean that some parts of
the page continue to rely on server side rendering
until we’re able to get the required resources. That’s OK. We’ll show you how. We just shared a vision
for incremental loading for the page. And now I’ll hand
it over to Shubhie to talk about keeping
initial sizes under budget. SHUBHIE PANICKER:
Thanks, Nicole. So that was a really
nice vision of how we might achieve progressive
loading so we can hit those initial resource targets. So yeah, let’s
look under the hood and see how Google and
Facebook are tackling this. There’s a few differences,
but quite similar goals. And hopefully, this can
give us some inspiration for what we can bring
to the larger ecosystem. Now there’s always a place for
early ideas and experiments. But everything we want
to cover in this section is really about these
battle tested technologies that have been proven on large
scale production applications. So at Google, these are apps
like Image Search, Google News, Hotels, Photos, and many more. And at Facebook, this is
Facebook.com as well as the new Facebook.com. So I will caveat this with
saying that I’ve personally worked on and led many
parts of the Google side of the infrastructure. So I’m deeply
familiar with that. But on the Facebook
side, my knowledge is from watching, like,
two videos and tech talks and talking to a couple
of Facebook engineers. So with that said,
let’s dive in. So let’s imagine a user
planning a trip to India. And they visit our
Hotels product. Now we could load all
of the code upfront. But then there could be a
ton of features in there that the user will never
interact with or unlock. So let’s start with the most
simplistic loading scenario. Imagine that this was all
written with a simple, naive client-side framework. And now on loading
hotels, you have to go download all the code. So this is typically the HTML
followed by the JavaScript and CSS, followed by the data. And then once we have
all of these resources, the browser can render the page. The problem is that now the user
is waiting a really long time before they can see or
interact with anything. Plus, there’s a ton of
features that now we have pushed down that the
user doesn’t care about. So looking at a basic
server-side rendering scenario, we might get to visually
complete sooner, because now we have the server
working for us doing all the heavy lifting
of getting the data, rendering all the markup,
and shipping that down. However, the page is not
necessarily interactive yet. Because often client
frameworks can take some time to refresh the data
and hydrate themselves, kind of redoing a lot of the
work that the server has done. So at this point,
I would recommend watching Jason and Hussein’s
talk at 9:30 tomorrow. They’ve covered this
full spectrum of loading and rendering
techniques, everything from client-side rendering
to server-side rendering, static rendering,
and kind of dive into the nuances and trade offs. Our talk today is not
about all of that. It’s primarily about
what has worked at scale for Google and Facebook. So server-side rendering
can be an improvement. But it creates this
problem, this uncanny valley where the page looks
ready, so the user starts interacting with it. But then– it’s not
interactive yet. And this has been coined as
rage clicks in the community. The users are clicking
away in frustration. A second problem with
server-side rendering is that it can be slow to
get pixels on the screen if the page is quite complex,
there are a lot of back ends to talk to, and some of those
data back ends are slow. So going back to our
Hotels example– so let’s zoom in. And let’s say the user clicks
on the more filters widget. And now we know for a fact that
this is an interesting feature. So we know to go download
the code for that. Oops, sorry. Yeah. So in a nutshell here, we are
sending down the minimum code initially and letting
user interaction, like those interactions with
the filter or the slider, dictate which code needs
to be fetched later. In practice, a lot of the
stuff has been preloaded. So let’s look at a
loading scenario. So initially, they send
down the minimal code. Now the page is able
to visually complete. Soon the user
starts interacting. And as they interact with those
specific features, the filters or the slider, we go and fetch
the code that is necessary. So now there are a ton of
features on this page that are never sent down. Because the user did
not care about them. In practice, though,
after the initial render, we’ll go figure out what’s
in the viewport and preload that content for you. And this avoids
unnecessary round trips.
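A minimal sketch of that viewport-driven preloading– the data-module attribute and preloadModule() helper are hypothetical:

```js
// A minimal sketch of viewport-driven preloading. The data-module
// attribute and preloadModule() helper are hypothetical.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      preloadModule(entry.target.dataset.module); // warm the cache early
      observer.unobserve(entry.target);           // preload only once
    }
  }
});
document.querySelectorAll('[data-module]')
    .forEach((el) => observer.observe(el));
```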
So to summarize our end state here: it’s not really literally
route-level code splitting. It is much finer
grained, interaction-driven late loading. And this allows us to send the
minimal code initially and stay within our budgets. So how do we avoid losing
those early clicks? And so the answer to this
lies in the contents of that critical inline
JavaScript in the initial HTML. So diving into that, we
basically split event handling into these three parts. There is a tiny event
delegation library. It’s called jsaction. It’s open source,
available at this link. And this allows us to start
queuing up those early clicks. The second piece here
is the dispatcher. And this is the part that knows
how to figure out what handlers are needed for the user’s clicks. And the actual event handling
code needed for interactivity is all late loaded on demand.
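A minimal sketch of the queue-and-replay idea– the data-action attribute, loadHandlerFor(), and replayQueuedEvents() names are hypothetical; this is not jsaction’s actual API:

```js
// A minimal sketch of queuing early clicks until handlers late-load.
// The data-action attribute, loadHandlerFor(), and replayQueuedEvents()
// are hypothetical; this is not jsaction's actual API.
const queuedEvents = [];
document.addEventListener('click', (event) => {
  const target = event.target.closest('[data-action]');
  if (!target) return;
  queuedEvents.push({ action: target.dataset.action, event });
  loadHandlerFor(target.dataset.action); // kick off the late load
}, true); // capture, so we see the click before anything else

// Once the real handler code arrives, the dispatcher replays the queue.
function replayQueuedEvents(handlers) {
  for (const { action, event } of queuedEvents) {
    handlers[action](event);
  }
  queuedEvents.length = 0;
}
```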
So the dispatcher is part of the framework bootstrap. And that’s a really
important piece here. It is fast. The code is small. It’s less than 47 kilobytes. It’s fixed. It doesn’t bloat. And this is important. And one aspect here
is that this is different from traditional
style hydration. We don’t need to
redo all of the work that the server
has already done. And so this is what makes
the framework bootstrap fast. And this is a really
important piece of getting to this
constant initial size. A small bootstrap
loading, not loading any of the app specific logic. And enforcements are
actually important here. Enforcements may help us
keep this initial JavaScript constant and clean. So at Google, we have
forbidden steps test that make sure that application
code doesn’t sneak in here. At Facebook, they
use budget monitoring tooling to keep this clean. So we’ve talked a lot
about JavaScript and CSS. What about data? So for initial data, the
server will figure out what data is needed and it will
embed this data in the footer, in the initial page
itself, and it is streamed. And so this makes sure that
on single page app navigations and view navigations, the
client has the data that it needs already right there. For late loader data,
now this is powered by the component system. And component is a
self-contained piece of UI. It declares its JavaScript
CSS, as well as its data. It knows how to fetch its data. Components can be
composed in a hierarchy. And the children know
how to fetch their data. So this starts to fill in
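A sketch of what such a self-describing component might look like– this descriptor shape is hypothetical, not the actual Google or Facebook internal format:

```js
// Hypothetical sketch of a self-describing component: it declares its
// JS, its CSS, and how to fetch its data, so the system can request
// all three together. Not the actual internal format.
const PriceSlider = {
  js: ['price-slider.js'],
  css: ['price-slider.css'],
  fetchData: () => fetch('/api/price-ranges').then((r) => r.json()),
  children: [],
};

const FiltersPanel = {
  js: ['filters-panel.js'],
  css: ['filters-panel.css'],
  fetchData: (params) =>
      fetch(`/api/filters?city=${params.city}`).then((r) => r.json()),
  children: [PriceSlider], // children know how to fetch their own data
};
```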
So this starts to fill in more pieces of our picture here with data
fetching, like I said. The initial data
is sent early on, in the
footer of the initial HTML, and it is streamed. Late data is fetched
concurrently at the same time as fetching late code. And the component
system helps us here by telling us exactly
what code and data is needed. And resources are never
more than a round trip away. The next piece I
want to talk about is streaming
server-side rendering. This is really important
because this allows us to flush early chunks. So for example, the header bar
at the top of our Hotels page is flushed super
early and followed by the left navigation,
and parts of the body, and eventually, the
footer coming in. So hopefully, you
can sort of see the chunks, the
early chunks that are getting flushed in sequence. And this ensures that our
content starts rendering quickly and progressively. Initial data is sent
down the footer. It is streamed. So if there are some slow back
ends, we don’t wait for them. We go ahead and flush. And then there’s a small
script that patches up the server-rendered HTML. And this keeps the
page interactive.
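A minimal, framework-agnostic sketch of that streaming shape, using a plain Node HTTP handler– renderHeader(), renderBody(), and fetchData() are hypothetical helpers:

```js
// A minimal sketch of streaming server-side rendering with early
// flushes. renderHeader(), renderBody(), and fetchData() are
// hypothetical helpers standing in for real rendering code.
const http = require('http');

http.createServer(async (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  // Flush the header chunk immediately, before any slow backends.
  res.write('<html><body>' + renderHeader());
  const data = await fetchData(req); // a potentially slow backend call
  res.write(renderBody(data));       // body chunk flushes when ready
  // Stream the initial data in the footer so the client framework
  // doesn't have to re-fetch it while hydrating.
  res.write(`<script>window.__DATA__ = ${JSON.stringify(data)}</script>`);
  res.end('</body></html>');
}).listen(8080);
```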
So it’s interesting that Google and Facebook are solving very similar problems here. And they have both arrived
at quite a similar end state. And I like to call this
smart server-side rendering with interaction
driven late loading. So this is a final important
aspect of this shared end state that we’ve arrived at. And this is not
having this problem of the cascade, or
waterfall effect, that can happen from suboptimal
late loading of code and data. So naively using APIs
like dynamic import can get us into
the situation where we start rendering something and
then we encounter a code split point. And then we figure out we
need to go fetch something, so we go fetch it. And then we continue rendering
and then encounter another code split point. And then go fetch
the code for that. And so this sort of
results in a cascade. And this is the HTTP cascade
that our system prevents, for both code and data.
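Here’s a sketch of that cascade with plain dynamic import– the module paths and render functions are hypothetical:

```js
// A sketch of the cascade problem with naive dynamic import; the
// module paths and render functions are hypothetical.
async function renderPage() {
  renderShell();
  // First split point discovered only during rendering: round trip 1.
  const { Filters } = await import('./filters.js');
  Filters.render();
  // A nested split point discovered inside Filters: round trip 2,
  // serialized behind the first. A dependency graph known ahead of
  // time lets the server send both (plus their data) together.
  const { Slider } = await import('./slider.js');
  Slider.render();
}
```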
So looking under the covers at how we get to this solution: declaring
nodes in our dependency graph is a really important
piece of this. And so this is what
drives late-loaded code. Let’s look at an example using
code that is conditionally loaded– say,
when you’re running an A/B
experiment and need to conditionally load code. So this is an example
from Facebook. In a naive situation, you
might do a dynamic import and conditionally load
your experiment code. But the Facebook syntax
here is declarative. And this makes it easy to know
what’s needed ahead of time. This can be picked up by the
build system and the runtime. And this makes it
possible, while fetching, to know the full set of
dependencies and get it all in a single round trip.
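As a loose sketch of the declarative style– modeled on the example shown on the slide, not an API you can install today:

```js
// A loose sketch of declarative conditional loading, modeled on the
// Facebook slide; importCond is illustrative, not a standard API.
// Because both branches are statically named, the build system and
// server can see the full dependency set ahead of time and deliver
// the right branch in a single round trip.
const Checkout = importCond('new_checkout_experiment', {
  experiment: './CheckoutExperiment.js',
  control: './Checkout.js',
});
```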
the syntax, a different design, but very similar principle. It’s a declarative
annotation that we put on the top of the file that
has the experimental version of the code. And this indicates
the experiment name and the original code path. And this hint is sufficient
for the build and the serving and the runtime to serve the
correct code at the right time when the user is
in an experiment. So at Google, for
effective code splitting, we separate our code into
separate phases for rendering and what’s needed
for interactivity. So first, we only load
what’s needed for rendering. And then later as
the user interacts, we go fetch the code that’s
needed for interactivity. Facebook has a bit more
sophisticated approach here. They have three phases:
in addition to the two that I mentioned, they have
a third phase that shows an
initial placeholder while loading, before anything
has even been rendered. And so for example, this
could be showing a spinner before a bit of
content is ready. So a comprehensive
dependency graph underlies and powers all of this stuff. It knows all the code
and all the dependencies in the application. And this dependency graph is
consumed by the build time and it’s deployed and
consumed by the serving system and the runtime. And after bootstrap
in the initial page, the client has learned how to do
late loading without cascading by receiving a small
JavaScript library that knows how to do modular code loading. So this is the full
set of features. I’ve already talked about
almost everything here. The three things that I
haven’t touched yet on, and I’m not going to get into,
are the last three bullets on the right. First, there is an
integrated A/B testing system that is deeply
integrated with all of this. There is serving of
minimal initial CSS. And at Google, we have
a CSS module system
that allows us to figure out
what this minimal CSS is. And finally, there’s
technology for deferring images and avoiding
set comes at a cost. It comes at the
cost of complexity. So it might not be the right
trade off for every app. And especially if the app
is simple or has mostly static content. And so this is the overlap
with what Facebook has. And it’s really interesting
that it’s practically 100%. They achieved a very similar
feature set, but using somewhat different techniques. And it’s really interesting that
they’ve independently arrived at the same list,
even though they have completely different
backing, implementation, and design. And this validates our
approaches and the list itself. And this starts to give
us a general template for the desirable
characteristics of a scalable, feature rich app. Facebook also has some
unique sophisticated features that I don’t have
time to get into. I do recommend
checking out their talk at this link from the
recent Facebook conference. Now the Hotels example
was a demonstration of how our system works. It’s by no means perfect. There’s certainly room
for improvement here. For example, just last week, we
saw that the One Google header bar was re-requesting
fonts that had already been requested,
at quite an inopportune time. So how could we bring
all this cool stuff to the larger ecosystem? So luckily, there’s already
a bunch of work underway. Angular has been
attempting to do this, and they have been
exploring and figuring out how to bring some of
the Google feature set and integrate
that into Angular. React has been plowing away with
features like lazy, suspense, and most recently,
selective hydration. And then they have this
really cool data story with GraphQL and Relay. Airbnb is an example of a React
app using current ecosystem tooling and experimenting with
an early selective hydration technique. Again, I’d recommend watching
Jason and Houssein’s talk to see that demo and
to learn about what can work with today’s tooling. So we’ve shared a
vision of what we want to have available
in the ecosystem, inspired by the techniques
of Google and Facebook, plus gaps we’re seeing in apps today. And this is
intentionally chaotic to indicate that there’s
a lot of moving parts here and there’s a lot of work
ahead and a long road. To really bring this
to the ecosystem, we need collaboration and deep
integration between frameworks and framework CLIs like
Angular and Create-React-App, as well as meta frameworks
like Next and Nuxt, as well as bundlers like
webpack and rollup, et cetera. It’s great that frameworks have
already started down this road. But meta frameworks have
a really big role here. They have a unique vantage
point, a unique position with access to both the
client and the server, control over the build
system and the deployment and the serving pipeline. And traditionally, they’ve
focused on the getting started experience and DX. But this is a much bigger
role and responsibility that we’d love to
see them succeed at. And as we’ve seen at Google
and Facebook, doing this requires an end-to-end
opinionated system. And this can include
enforcements, policies, budgets. And we’re not just giving
an academic talk here. We’re actively
participating in this space, focusing on constant initial
bundle size and smart code splitting and starting with
some simple initial changes to Next.js. So moving on to the next
segment of this talk: we’ve just seen some
exciting technology. How can it come
to the ecosystem for better perf outcomes? Let’s take a moment to
note that outcomes are not great for everyone in
the ecosystem today. There’s a lot of users that are
not having a great experience. There’s a large
fraction of Chrome users in emerging markets. Device characteristics
are not great. They are actually quite
similar to large parts of middle America. And network conditions
can, of course, not be taken for granted anywhere,
including right here. So these are Lighthouse scores
from a popular meta framework. And clearly it is possible
to achieve good outcomes, as we can see from the
green box on the right. But what we really want
to do is come together as a community to
move this baseline and figure out how to
get more people shifted towards that right bucket. So these are loading metrics
for various frameworks. And it’s a recent study that’s
ongoing on our team, using [INAUDIBLE], a tool integrated
into WebPageTest. It’s run on HTTP
Archive origins– about 4 million origins– in which we
were able to detect libraries and frameworks, each used by
tens of thousands of URLs. We’ve hidden the names
of the frameworks, as that’s not important. And it’s too early to
draw conclusions yet. But I just wanted to note a
couple of early observations. [INAUDIBLE] scores are
not wildly different. They’re actually quite similar. And frameworks don’t have a big
role in first contentful paint. But they do have a role in the
difference between time to interactive and
first contentful paint, because that kind of dictates
what the hydration [INAUDIBLE] is for that framework. However, we’re finding
that this difference is not widely varying. It’s quite similar, still. So our team has started looking
at how to serve application JavaScript better. But as we dug in further,
we found some surprises. In practice, there’s a ton of
truly unnecessary JavaScript that’s getting shipped down. Things like polyfills that
the browser doesn’t need. Collectively, our
team has spent a lot of time deep
diving into bundles, looking at breakdowns. And we’re finding that it’s
not unusual to see 20% to 30% of unnecessary JavaScript. And another
interesting data point is that NPM modules are
a big part of the app. Google and Facebook didn’t have
to deal with these problems. So digging in, these are
the top three reasons that we’re finding for
unnecessary JavaScript. The first and foremost
is over transpilation, both in the first party
application code, as well as code in the installed
NPM modules. Second, polyfills–
large volumes and large blocks of
duplication there, and finally, both
over-transpilation and polyfills all lead
to this duplication of modules in the bundle. NICOLE SULLIVAN: So NPM did a
study of a few thousand client side applications. And they found that
97% of bundle size was installed node modules. We’ve personally seen a high
variation in that number. But for almost everyone,
it’s greater than 60% to 70%. NPM also said that the
average application has 1,000 NPM dependencies. And it is not at all uncommon
to have 2,000 NPM dependencies. When we learned this, we had
this moment of surprised, not surprised. Because in our own experience
developing apps, we found that we used all
sorts of things from NPM. But at the same time, I
think the magnitude of it was a little bit surprising. Now I want to be super
clear– the answer is not to stop sharing code. That’s what makes our
community stronger. At this point, we are
able to build apps. And we focus on
the thin layer that makes our app
unique, that’s going to make users want
to use it, that’s going to make it different
or exciting or better. We don’t spend a lot
of time on boilerplate, because we have a whole
pile of boilerplate that we can just use from NPM. This is absolutely a positive. So you might want to know
what kind of dependency bloat you have. Maybe you have big dependencies. With webpack bundle
analyzer, you can figure out how big
your dependencies are. Because they actually show
up bigger in this view. You might also have
duplicated code. Source Map Explorer shows
you all the details of your minified bundle. For example, you can see that
this application contains two copies of React. You can also check if your
dependencies make sense by route. In particular, if you’re doing
route-based code splitting, webpack bundle
analyzer shows you that this application includes
three copies of moment.js in different routes.
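Both tools are easy to try from the command line– the output paths here are hypothetical and depend on your build setup:

```bash
# A sketch of inspecting bundle makeup; output paths are hypothetical
# and depend on your build configuration.
npx webpack-bundle-analyzer dist/stats.json
npx source-map-explorer 'dist/*.js'
```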
Now there might be a case where that makes sense. But it probably doesn’t. SHUBHIE PANICKER: So how could
it be easier to ship JavaScript to modern browsers? So differential
loading is a technique that works today for enabling
loading of different bundles to different browsers based
on their support level. How does it work today? So today, the module
nomodule pattern works well with
tooling available. There’s a link
here with details. But the core idea
here is that you generate a second bundle
using [INAUDIBLE] preset and make a second configuration
for ES 2015 Plus code And then update the
HTML as shown here. Set appropriate entry
points using module for modern browsers and
nomodule for older browsers. Module, nomodule really
works in the real world. It’s especially effective
when meta frameworks and CLIs support it out of the box. Angular’s just launched
support for this in Version 8. And they’re seeing big
wins, with users seeing significant size savings. And this is the slide that was
presented at the recent Angular conference. Basically, they’re finding that
there are savings of anywhere from 7% to 20%. In apps with more polyfills,
this can be up to 30%. So as Nicole said, a
large part of our app is installed NPM modules, and
tools don’t transpile these by default. So there is
this widespread expectation that these NPM
modules contain ES5. And this means huge
portions of our apps are stuck in ES5, even though
module loading and delivery techniques are in place
for shipping modern syntax. So we now need to tackle
this on the publishing side for NPM modules. So today, for example,
in package.json, it shows you what
version of Node a module requires, but there’s no
indication of which version of JavaScript it contains. So clearly there’s
something missing here. We need more information. So this is a current proposal
that our team is pursuing. What if we could add a
syntax field in package.json to directly indicate
ES module support?
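As a sketch, the proposed field might look something like this– the exact name and shape were still under discussion, so treat this as illustrative:

```json
{
  "name": "some-module",
  "version": "1.0.0",
  "main": "dist/index.js",
  "syntax": { "esmodules": true }
}
```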
And this is not literally about ES modules. ES modules are a good
proxy for loading JavaScript files with modern
ES2015+ features, everything from async/await, classes,
arrow functions, to fetch and promises. However, in a few
years from now, this might leave us in
an awkward position. So we really need
to get creative and think of what could
be a longer term solution. So we don’t know what the
long term solution is. We have folks on our team that
are deep in this space thinking about solutions. I have their Twitter
handles here. Feel free to reach out if
you have thoughts and ideas. But at the very least, we
think we want these properties in a compelling long
term solution designed for an evolving set
of platform APIs, not penalizing
modern browsers by sending tons of unnecessary
code to them, aligning with edge caching
and performance needs, and compatible
with existing tools without significant
modifications. There’s a few things module
authors can do today. So I encourage you to
publish modern JavaScript when possible– even compile to modern
JS when writing TypeScript– and ship
down-level code as a backup. So let’s revisit where
we want things to be. We’ve added a few things
since the last section. We’ve added differential
bundling, publishing modern (rather than down-level) JavaScript,
and adding browser primitives. So we all have really
important roles to play here. Frameworks have a
really big role. And we’ve sort of
taken our guess at highlighting various
areas where they are helping or wanting to help. Meta frameworks have a big role. Same for bundlers, especially
on both the code splitting and differential
loading sides; app authors, module authors, as well
as package managers. And finally, we have a
big role here, as well, in terms of making all of
these people successful, everything from
shipping new browser primitives to direct PRs
to open source projects. NICOLE SULLIVAN: It’s
going to take a rainbow to get this done. So let’s go back for a moment
to the dream we talked about in the beginning. We hope for a world
where feature richness and performance wouldn’t
be so squarely opposed to one another. We believe in the possibility. And we’re ready to
make that happen, both with PRs to the ecosystem
and with a framework fund. At CDS this year, we announced
that we’re starting a framework fund to help support
the kind of work that we’ve talked
about here today. If you are working in
frameworks and tooling, or any of the areas that
we’ve talked about, this link that
we’re sharing is how you can apply to
the framework fund and get Chrome’s continued
support for your good work. Thank you. [APPLAUSE] [GOOGLE LOGO MUSIC PLAYING]



