<p>Techno Barje — <a href="http://techno-barje.fr/">techno-barje.fr</a> — Alexandre Poirot</p>
<h1><a href="http://techno-barje.fr/post/2023/10/23/knowledge-versus-market">Knowledge versus market -- Sharing versus Ease of use</a></h1>
<p><time datetime="2023-10-23T19:00:00.000Z">2023-10-23</time></p>
<p>This blog post is a continuation of the previous one about the <a href="/post/2023/10/20/history-of-edition-and-publishing-in-web-browsers/">history of editing and publishing in web browsers</a>.</p>
<p>I'm now going to focus on the significant shift in the vision of the Web between the first two browsers: WorldWideWeb versus Mosaic.</p>
<p>While the very first one, WorldWideWeb, defined the web as editable by default,
Mosaic restricted the browser to a read-only role.
All features related to web page editing disappeared in Mosaic.</p>
<p>Later down the road, Netscape 4 re-introduced a web page editor.
But as Netscape copied Mosaic's interpretation of the Web,
web pages were no longer editable by default.
This somewhat divided users into two distinct groups: readers versus authors.
The editor in Netscape 4 was a feature external to the browser, opening a distinct window.</p>
<p>An interesting fact is that both WorldWideWeb and Netscape 4 were superseded by Mosaic and Internet Explorer,
which focused strictly on a read-only vision of the Web.</p>
<h1 id="worldwideweb-the-very-first-browser">WorldWideWeb, the very first browser.</h1>
<p>The WorldWideWeb browser, and the beginnings of the web, were created at the lab called CERN, the European Organization for Nuclear Research.</p>
<p>Their browser and the web spread within various research labs and universities.
The main audience was scientists and librarians.</p>
<p><a href="https://home.cern/fr/science/computing/birth-web/short-history-web">This page</a> does a very nice summary of these first usages of the web:</p>
<blockquote>
<p>The Web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.</p>
</blockquote>
<p><a href="https://www.ahp-numerique.fr/2021/12/02/edition-scientifique-libre-acces/">This other page</a>, in French, describes at length how scientists have shared information since the 1960s.
The part about the web ("1.6 Le web (1984-1996)") is also an interesting read.</p>
<p>From these extracts, it isn't clear whether users were really editing web pages from the browser.
It looks like the web was mostly meant to query large databases of documents (scientific articles) and information (phonebooks).
It sounds like it was already going in the direction of a read-only web.</p>
<p>On the other hand, it is easy to explain why the web was originally restricted to scientists and librarians.
This browser only worked on <a href="https://en.wikipedia.org/wiki/NeXT">NeXT computers</a>.
This was a serious obstacle to reaching a widespread audience, as these computers targeted the higher-education and business markets.</p>
<h1 id="violawww-the-second-browser">ViolaWWW, the second browser</h1>
<p>This browser got very little coverage in the history of browsers, but may have had a significant impact on the future of the web.</p>
<p>Many browsers appeared after WorldWideWeb. <a href="http://9p.sdf.org/who/tweedy/ancient_browsers/">This web page</a> archives a list of all of them.</p>
<p>But <a href="https://en.wikipedia.org/wiki/ViolaWWW">ViolaWWW</a> was particularly important for three reasons:</p>
<ul>
<li>CERN suggested using this browser instead of WorldWideWeb, and it quickly became the default browser at the lab <a href="https://www.w3.org/People/Berners-Lee/FAQ.html#browser">source</a>.</li>
<li>This browser, while becoming popular for its additional features (scripting and stylesheets), also dropped all the editor aspects. It looks like it only rendered web pages and disallowed any editing.</li>
<li>The main author of Mosaic (Marc Andreessen) was shown ViolaWWW just before initiating the Mosaic project. <a href="https://www.w3.org/People/Berners-Lee/FAQ.html#Mosaic">1st source</a> <a href="https://www.w3.org/DesignIssues/TimBook-old/History.html">2nd source</a></li>
</ul>
<p>ViolaWWW may well have been the specific project that steered the future of browsers toward read-only tools to browse the web,
mostly by inspiring the creation of Mosaic, which also got rid of editing features to focus on reading and browsing the web.</p>
<h1 id="mosaic-the-first-widespread-browser">Mosaic, the first widespread browser.</h1>
<p><a href="https://en.wikipedia.org/wiki/Mosaic_(web_browser)">Mosaic</a> was developed at the National Center for Supercomputing Applications (NCSA).</p>
<p>It was the first browser to reach a very wide audience, up to the mass market.
The main difference with past browsers was its compatibility with many hardware platforms and operating systems.
It was the very first to support Unix, Mac OS, and Windows.
The team behind it also focused a lot on making it easy to install and use.</p>
<p>This browser sealed this vision of the web by becoming far more popular than its predecessors.
Mosaic, like ViolaWWW, focused strictly on browsing the web. It contained no feature for editing web pages.</p>
<h1 id="knowledge-sharing-versus-market-and-ease-of-use">Knowledge sharing versus Market and ease of use</h1>
<p>Now it may be interesting to compare the vision of the web promoted by CERN/WorldWideWeb versus NCSA/Mosaic.</p>
<p>The CERN described the web in a simple and generic way:</p>
<blockquote>
<p>The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.
<a href="https://info.cern.ch/hypertext/WWW/TheProject.html">source</a></p>
</blockquote>
<p>The original web at CERN was meant to ease knowledge sharing between scientists.
It was probably not intentionally targeting any larger audience.</p>
<p>The web of Mosaic was clearly shifting toward a wide audience of ordinary people.
And the way they promoted the Internet was quite different:</p>
<blockquote>
<p>Mosaic offers a window into the Internet, presenting content and services to users in a friendly, interactive, point-and-click way.
<a href="http://mosaic.mcom.com/lowres_www/lowres/backgrounder/mosaic.html">source</a></p>
</blockquote>
<blockquote>
<p>Mosaic Communications Corporation intends to support companies and consumers...to accelerate the coming of this new era with tools that ease and advance online communications.
<a href="http://mosaic.mcom.com/lowres_www/lowres/backgrounder/future.html">source</a></p>
</blockquote>
<p>Mosaic was promoting a whole market/ecosystem for the Internet.
It would be made of companies providing services to consumers.
It was drastically different from CERN's phrasing: "giving universal access to a large universe of documents".</p>
<p>I imagine we could debate at length about these two ways of framing the web, but I would like to instead focus on the most important appeal of Mosaic, which explains its success: ease of use.</p>
<p>Mosaic surely gained lots of traction thanks to its support of most hardware platforms and operating systems,
but it also polished its ease of use. Unfortunately, it focused only on browsing and reading the web.
I'm wondering: what if Mosaic had also spent some time, in those early days, on helping the first users of the web create and edit their own websites?</p>
<p>Instead, it encouraged companies to build the services. Building the services here meant building the web pages.
This ultimately delegated content creation to experts from the very early days.</p>
<p>What if Mosaic had focused on the ease of use of web page editing?
What if Mosaic had continued along the lines of Tim Berners-Lee's original vision of the web, described <a href="https://www.w3.org/DesignIssues/Editor.html">over here</a>?</p>
<blockquote>
<p>If you think surfing hypertext is cool, that's because you haven't tried writing it.</p>
</blockquote>
<blockquote>
<p>The Web is universal and so should be able to encompass everything across the range from the very rough scribbled idea on the back of a virtual envelope to a beautifully polished work of art. </p>
</blockquote>
<blockquote>
<p>A first assumption, by the way, is that you have modeless interface in which browsing and editing are not separate functions. If to edit a page, you have to switch from browsing mode to editing mode, then you have lost already.</p>
</blockquote>
<p>That's the vision I'd like to elaborate on in 2023: give the web a second chance to be (almost) fully editable by default.</p>
<p>Note: I published this article long after writing it. I actually wrote it before the release of <a href="https://a16z.com/the-techno-optimist-manifesto/">Marc Andreessen's manifesto</a>, which sparked lots of debate about his vision of tech, like <a href="https://davekarpf.substack.com/p/why-cant-our-tech-billionaires-learn">here</a>. This is typically the kind of discussion I find enlightening, but I really wanted to focus on actual actionable Web features.</p>
<h1><a href="http://techno-barje.fr/post/2023/10/20/history-of-edition-and-publishing-in-web-browsers">The History of editing and publishing in web browsers</a></h1>
<p><time datetime="2023-10-20T09:00:00.000Z">2023-10-20</time></p>
<p>Some web browsers used to offer built-in features to <strong>edit</strong> and <strong>publish</strong> web pages.</p>
<p>You could <strong>edit</strong> any web page. Modify the text, the formatting and styling, attach images, link to another page...</p>
<p>After having done these changes, you could <strong>publish</strong> them to the web server so that others can see your contribution.</p>
<p>I'm going to highlight that this was only possible for a limited period of time, on browsers with a limited audience.</p>
<h1 id="worldwideweb-1990-1994">WorldWideWeb (1990-1994)</h1>
<p>The Web was originally created within a European research lab called CERN, the European Organization for Nuclear Research.<br>This is where the very first browser, called "WorldWideWeb", was developed.<br>The original documentation pages are still available online!<br>The following quote highlights the read and write capabilities of this browser.</p>
<blockquote>
<p>The "WorldWideWeb" application for the NeXT is a prototype Hypertext browser/editor.<br><a href="https://info.cern.ch/hypertext/WWW/NeXT/WorldWideWeb.html">source</a></p>
</blockquote>
<p>The main author of this application, Tim Berners-Lee, also insists on the editor aspect in this retrospective:</p>
<blockquote>
<p>The first web browser - or browser-editor rather - was called WorldWideWeb [...]<br><a href="https://www.w3.org/People/Berners-Lee/WorldWideWeb.html">source</a></p>
</blockquote>
<p>And another time in this note:</p>
<blockquote>
<p>If you think surfing hypertext is cool, that's because you haven't tried writing it.<br><a href="https://www.w3.org/DesignIssues/Editor.html">source</a></p>
</blockquote>
<p>In 2019, the CERN organized a project to rebuild WorldWideWeb using today's web technologies.
While doing so, they published <a href="https://worldwideweb.cern.ch/worldwideweb/">a website</a> describing the original vision of the Web
and its related browser application in details.</p>
<p>This website also insists a lot on the editor side of the browser:</p>
<blockquote>
<p>Today it's hard to imagine that web browsers might also be used to create web pages.
It turned out that people were quite happy to write HTML by hand—something that Tim Berners-Lee and colleagues never expected.
They thought that some kind of user interface would be needed for making web pages and links. That's what the WorldWideWeb browser provided.</p>
</blockquote>
<p>You can test this browser on this project <a href="https://worldwideweb.cern.ch/browser/">web page</a>.
This works slightly better on Chrome than Firefox, but I must warn you, it is quite buggy. There are many cursor issues.</p>
<p><img src="/public/editable-web/worldwideweb/worldwideweb.png" alt="Screenshot of editing of the home page in WorldWideWeb"></p>
<p>Nonetheless, it is quite stunning to see how this browser actually works.<br>You can move the caret anywhere, in all the web pages, and modify the text anywhere.<br>Do some basic styling, copy and paste text, ...<br>Exactly like Microsoft Word / Google Docs, but against remote web pages!</p>
<p>But... it had one serious limitation.<br>While you could edit all the pages, you could only save your changes to local files.<br>You could edit, but not publish your changes.</p>
<p>This is mentioned on this documentation page about how to create a new page:</p>
<blockquote>
<p>You can edit existing documents using WWW so long as they are files. You cannot normally edit information retrieved from remote databases.<br><a href="https://info.cern.ch/hypertext/WWW/NeXT/MakingDocuments.html">source</a></p>
</blockquote>
<p>This actually relates to an implementation detail, which was clarified by Tim Berners-Lee:</p>
<blockquote>
<p>It would browse http: space and news: and ftp: spaces and local file: space,
but edit only in file: space as HTTP PUT was not implemented back then.<br><a href="https://www.w3.org/People/Berners-Lee/WorldWideWeb.html">source</a>
(I will followup about this in another blog post)</p>
</blockquote>
<p>None of the future browsers ever re-implemented such behavior: editable by default.</p>
<h1 id="mosaic-1993-1997">Mosaic (1993-1997)</h1>
<p>The second most notable browser, "Mosaic", drastically changed the vision of the Web.<br>You could only open and browse HTML pages. All the editing features disappeared in this browser.<br>It introduced the URL bar, which wasn't visible in WorldWideWeb.</p>
<p><img src="/public/editable-web/mosaic/view-source.png" alt="Screenshot of view source dialog in Mosaic"></p>
<p>You could only open the HTML sources internally (via the view-source feature), or via an external editor application.
<a href="https://github.com/alandipert/ncsa-mosaic/blob/af1c9aaaa299da3540faa16dcab82eb681cf624e/src/gui-dialogs.c#L2704">Source code</a></p>
<p>Mosaic influenced the long-term future of web browsers much more.
Looking at its UI, you can see that it is very similar to today's browser UIs.</p>
<p>Note that you can run Mosaic on Linux!
But you have to build it from <a href="https://github.com/alandipert/ncsa-mosaic">sources available on GitHub</a>.
It can easily fail building, but see <a href="https://github.com/alandipert/ncsa-mosaic/issues/14">this issue</a> to address the failures.</p>
<h1 id="early-netscape-versions-up-to-2-1994-1996">Early Netscape versions up to 2 (1994-1996)</h1>
<p>"Netscape" started being released one year after the first version of Mosaic.
Netscape took the lead as the most popular browser, but still didn't reimplement page editing in any way.</p>
<p><a href="https://www.webdesignmuseum.org/old-software/web-browsers/netscape-navigator-2-0">Screenshots for Netscape Navigator 2</a></p>
<h1 id="netscape-3-1997">Netscape 3 (1997)</h1>
<p>Netscape 3, via the Gold edition, shipped the "Netscape Editor" feature.
Pressing "Ctrl-E" would let you edit any page in it!
You could then save your modifications to a local file, but still not publish them to the remote server.<br>You could edit, but not publish your changes.
This was really similar to the behavior of the WorldWideWeb browser, except that pages weren't editable by default.
The editing had to be done within a distinct application/window.</p>
<p><img src="/public/editable-web/netscape-3/composer.png" alt="Screenshot of Editor in Netscape 3">
<a href="https://www.webdesignmuseum.org/old-software/web-browsers/netscape-navigator-3-04-gold">source</a></p>
<h1 id="netscape-4-june-1997-2000">Netscape 4 (June 1997-2000)</h1>
<p>Netscape 4 started exposing a publish feature while renaming "Netscape Editor" into "Netscape Composer".
<img src="/public/editable-web/netscape-4/composer.png" alt="Screenshot of Composer in Netscape 4">
Notice the HTTP -or- FTP upload methods. (I'll follow up about the HTTP upload method in another blog post)<br>This finally addressed the shortcomings of the WorldWideWeb browser.
You could easily publish the changes you had just made on a page, as long as you had the necessary
credentials on the remote web server for uploading files.</p>
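<p>For reference, the HTTP upload method boils down to the HTTP PUT verb. Here is a rough sketch, in today's JavaScript, of what publishing a page over PUT amounts to; the helper name and URL are illustrative, not Netscape's actual implementation:</p>

```javascript
// Build the parameters of an HTTP PUT request publishing an HTML page.
// Kept as a pure function so the network call itself stays separate.
function buildPublishRequest(pageUrl, html) {
  return {
    url: pageUrl,
    options: {
      method: "PUT",
      headers: { "Content-Type": "text/html; charset=utf-8" },
      body: html,
    },
  };
}

// Usage (assuming a server that accepts PUT with proper credentials):
//   const { url, options } = buildPublishRequest("https://example.org/page.html", newHtml);
//   await fetch(url, options);
```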
<p>Unfortunately, this is also the last popular Netscape product.
Netscape had 80% market share in 1997, but only 13% in 2000!
<a href="https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Older_reports_(pre-2000)">1st source</a>
<a href="https://en.wikipedia.org/wiki/Netscape_Navigator#/media/File:Netscape-navigator-usage-data.svg">2nd source</a></p>
<h1 id="netscape-6-2000-2002">Netscape 6 (2000-2002)</h1>
<p>Note that Netscape 5 was never released. The version was dropped in favor of Netscape 6.</p>
<p>Surprisingly, Netscape 6 dropped the publish feature from Composer:
<img src="/public/editable-web/netscape-6/composer.png" alt="Screenshot of Composer in Netscape 6">
This got us back to the behavior of Netscape 3:
you could make changes locally, but you could no longer publish them to the web server.</p>
<p>This is mentioned in the Netscape 6 troubleshooting notes:</p>
<blockquote>
<p>Problem:
The Editor application does not support the Publish feature.
<a href="https://www-archive.mozilla.org/unix/solaris#BrowserGeneral">source</a></p>
</blockquote>
<h1 id="netscape-7-2002">Netscape 7 (2002)</h1>
<p>Surprisingly again, Netscape 7 revived the publishing feature in Composer:
<img src="/public/editable-web/netscape-7/composer.gif" alt="Screenshot of Composer in Netscape 7">
But at this point, Netscape was under 4% market share.
<a href="https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Older_reports_(pre-2000)">source</a></p>
<p>The complex history between Netscape 4, 5, 6 and 7 is probably related to the move
of Netscape to an open source codebase. This was initiated by the Mozilla project, which started in 1998,
one year after the release of Netscape 4. <a href="https://web.archive.org/web/20140603235609/http://archive.wired.com/techbiz/media/news/1998/11/16466">source</a>
This may be the reason why version 5 was cancelled and why some features were dropped in Netscape 6.</p>
<p>On the plus side, today, we are able to track the development of the publishing feature which was re-implemented from scratch in the open source codebase.
The latest version of Netscape 6 was based on Mozilla 0.9.4.1. <a href="https://en.wikipedia.org/wiki/Netscape_6#Release_history">source</a>
The feature was tracked by <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=88208">this bugzilla ticket</a>,
the first patches landed into Mozilla 0.9.7 (November 2001) and the very last patch landed into Mozilla 1.0 (March 2002).
Netscape 7 was later released in August 2002 based on Mozilla 1.0. <a href="https://en.wikipedia.org/wiki/Netscape_7">source</a></p>
<h1 id="browsers-landscape-from-2002-till-now">Browsers landscape from 2002 till now</h1>
<p>In 2002, "Internet Explorer" already had around 90% market share.
And Internet Explorer did not have any editing capabilities.</p>
<p><img src="/public/editable-web/internet-explorer/ie6.png" alt="Screenshot of Internet Explorer 6"></p>
<p>A few years later, in 2004, the first "Firefox" version was released; it also focused only on browsing and reading the web (like Internet Explorer).
"Netscape Composer" was never reintroduced in Firefox.
<img src="/public/editable-web/firefox-1/firefox.png" alt="Screenshot of Firefox 1"></p>
<p>In 2008, "Chrome" doubled down on stripping down browser features and UI to delegate even more capabilities to the websites.
<img src="/public/editable-web/chrome-1/chrome.png" alt="Screenshot of Chrome 1"></p>
<h1 id="bonus-seamonkey-2006-today">Bonus: Seamonkey (2006-today)</h1>
<p>A browser still exists today, in 2023, with web page editing <strong>and</strong> publishing, exactly like Netscape 7!</p>
<p>Believe it or not, a group of contributors has been maintaining the original open source codebase of Netscape over the decades!!<br>This browser is <a href="https://www.seamonkey-project.org/">SeaMonkey</a>.
Like Netscape, it includes a Web browser, but also a mail reader (sharing code with Thunderbird), a newsgroup reader, IRC chat, and last but not least, an HTML editor (Composer).
This project is still active and released a new version in September.</p>
<p>I encourage everyone to give it a try. It is really amazing to see all this old and complex software still working today.
It is also the easiest way to run a Web browser with full editing and publishing support on modern computers.
The icing on the cake is that, as it is based on a recent version of Gecko (the Web engine of Firefox), it benefits from almost the same support of Web standards as Firefox.</p>
<p><img src="/public/editable-web/seamonkey-2.53/composer.png" alt="Screenshot of Composer on SeaMonkey"></p>
<h1 id="conclusion">Conclusion</h1>
<p>Web page editing and publishing features were only exposed through the browser UI to a wide audience for three years (1997 to 2000).
It was actually even shorter than that, as this was the Netscape 4 era, when its market share fell apart.</p>
<p>I'll investigate in a following blog post how different WorldWideWeb's vision of the web was compared to all subsequent browsers. And the consequences it had on how the Web is used starting from Mosaic.</p>
<p align="center">Overview of browser history</p>
<p><img src="/public/editable-web/history-of-edition-and-publishing-in-web-browsers.drawio.png" alt="Overview of all this history of browsers"></p>
<p><a href="https://app.diagrams.net/#Uhttp%3A%2F%2Ftechno-barje.fr%2Fpublic%2Feditable-web%2Fhistory-of-edition-and-publishing-in-web-browsers.drawio">source for this diagram</a></p>
<h1><a href="http://techno-barje.fr/post/2023/10/05/declarative-web-component">Declarative Web Component to replace build-time HTML templates</a></h1>
<p><time datetime="2023-10-05T12:20:00.000Z">2023-10-05</time></p>
<p>Recently I moved away from Jekyll to build this blog (<a href="/post/2023/10/03/minimal-blog-post-setup/">see more</a>).<br>While doing so I also moved away from traditional HTML templates.<br>Instead I started using a "single file declarative web component".<br>The nice outcome is that the HTML page now mostly contains the text content of the blog post!<br>Do not hesitate to view the source of this page :)</p>
<p>This idea of "single file Web Component" actually comes from Tomasz Jakut (CK Editor) very simple JavaScript loader <a href="https://ckeditor.com/blog/implementing-single-file-web-components/">described over there</a>.</p>
<h1 id="single-file">"Single File"</h1>
<p>In one file you can bundle the HTML, the CSS and the JavaScript for a given Web Component.<br>This is handy as you only have one file to register.<br>On this blog, all the HTML pages displaying a blog post use a unique Web Component to implement the blog design/template.<br>Instead of having a build-step tool duplicating the template in every single HTML page,
the browser engine uses this unique Web Component to display all the blog posts the same way.</p>
<p>Here is an overview of this Web Component.<br>You can see the header with the blog image, the navigation links, the footer,
and finally, in the middle of all this, a <code><slot></code> to define where the blog post content should be put.</p>
<pre><code><template>
<header>
...
</header>
<nav role=navigation>
<ul>
<li><a href="/">Index</a></li>
<li><a href="/archives/">Archives</a></li>
<li><a href="/resume/">About me/Resume</a></li>
</ul>
</nav>
<div id="content"><slot>ARTICLE</slot></div>
<footer><p>Copyright &copy; 2023 - Alexandre Poirot</p></footer>
</template>
<style>
header { background-image: url("/images/header.png"); }
nav { background: black; color: white; }
</style>
</code></pre>
<h1 id="declarative">"Declarative"</h1>
<p>This refers to <a href="https://github.com/WICG/webcomponents/blob/gh-pages/proposals/Declarative-Shadow-DOM.md#self-sufficient-html">Declarative-Shadow-DOM</a>
and <a href="https://github.com/WICG/webcomponents/blob/gh-pages/proposals/Declarative-Custom-Elements-Strawman.md">Declarative-Custom-Elements-Strawman</a> proposal... in some way.<br>The idea is being able to load it from the HTML page, without JavaScript.</p>
<p>On this web site, the Web Component used on all blog post pages is registered like this:</p>
<pre><code><link rel="component" href="/blog-article.wc">
</code></pre>
<p>It implements the <code><blog-article></code> DOM element used in the HTML page.
Unfortunately, as this isn't part of any implemented standard, I'm using Tomasz's naive JavaScript loader to make this work.</p>
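<p>To make the idea concrete, here is a minimal sketch of what such a loader can do. This is my own simplified approximation, not the actual <code>loader.js</code> from Tomasz Jakut; names and behavior are illustrative:</p>

```javascript
// Derive the custom element tag name from the component file name,
// e.g. "/blog-article.wc" -> "blog-article".
function tagNameFromHref(href) {
  return href.split("/").pop().replace(/\.wc$/, "");
}

// Fetch a single-file component and register it as a custom element
// rendering its <template> (and <style>) into a shadow root.
async function loadComponent(link) {
  const text = await (await fetch(link.href)).text();
  const doc = new DOMParser().parseFromString(text, "text/html");
  const template = doc.querySelector("template");
  const style = doc.querySelector("style");

  customElements.define(tagNameFromHref(link.href), class extends HTMLElement {
    connectedCallback() {
      const shadow = this.attachShadow({ mode: "open" });
      if (style) shadow.appendChild(style.cloneNode(true));
      shadow.appendChild(template.content.cloneNode(true));
    }
  });
}

// In the page, process every declared component:
//   document.querySelectorAll('link[rel="component"]').forEach(loadComponent);
```

<p>Note that the real loader also has to handle the optional <code><script></code> part of the component file; this sketch only covers the template and style.</p>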
<h1 id="example-of-a-blog-post-html-page">Example of a blog post HTML page</h1>
<p>The traditional header of any HTML page in 2023:</p>
<pre><code class="language-html"><!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
</code></pre>
<p>The blog post title followed by the blog title.</p>
<pre><code class="language-html"> <title>Using the fediverse/Mastodon for comments on blogs - Techno Barje</title>
</code></pre>
<p>Then, Tomasz JS loader, which will implement the support for <code><link rel=component></code>.</p>
<pre><code class="language-html"> <script src="/loader.js"></script>
</code></pre>
<p>The declaration of the <code><blog-article></code> Web Component</p>
<pre><code class="language-html"> <link rel="component" href="/blog-article.wc">
</code></pre>
<p>The unique CSS for the whole blog, and the end of <code></head></code> section.</p>
<pre><code class="language-html"> <link href="/document.css" rel="stylesheet" type="text/css">
</head>
</code></pre>
<p>Now this is where it becomes interesting.<br>The <code><blog-article></code> component will implement the overall blog design/template.<br>So that the HTML page can focus only on the specific content of that specific HTML file:</p>
<ul>
<li>The blog post title and link to it,</li>
<li>Its publish date,</li>
<li>The actual content of the blog post.</li>
</ul>
<pre><code class="language-html"><body>
<blog-article>
<div class="entry-title">
<h1><a href="/post/2023/10/05/fediverse-for-comments-on-blogs/">Using the fediverse/Mastodon for comments on blogs</a></h1>
<time datetime="2023-10-05T10:00:00.000Z" pubdate>Oct 05, 2023</time>
</div>
<article>
... The content of a blog post ...
</article>
</blog-article>
</body>
</html>
</code></pre>
<p>And that's it. We close the <code></html></code> right after.</p>
<h1 id="outcomes">Outcomes</h1>
<p>My hope is that by simplifying the HTML files down to the bare text content, we can revive the direct editing of HTML files!</p>
<p>In 2023, everyone is still using either:</p>
<ul>
<li>Wordpress/Medium/write.as/Fediverse to publish content when you don't want to care about the hosting side of things,</li>
<li>Jekyll/Hugo/writefreely.org or more and more custom build scripts for the tech-savvy who are at ease running command lines and managing the (self) hosting.</li>
</ul>
<p>Except for a few web survivalists, I've not seen anyone edit HTML pages to publish text. HTML is now some kind of assembly language, only generated, or at best assembled, by programs.</p>
<p>I'll keep blogging about this topic as this Declarative Web Component trick is only one small thing. We can do much more to get back to the roots of the editable web.</p>
<h1><a href="http://techno-barje.fr/post/2023/10/04/fediverse-for-comments-on-blogs">Using the fediverse/Mastodon for comments on blogs</a></h1>
<p><time datetime="2023-10-04T09:30:00.000Z">2023-10-04</time></p>
<p>Yesterday I moved away from Jekyll to build this blog (<a href="/post/2023/10/03/minimal-blog-post-setup/">see more</a>).<br>But while doing that, I also moved away from Disqus for handling the comments on my blog.<br>This wasn't a trivial move, as it was hard to keep the old comments.
I realized late that I was bound to this provider.</p>
<p>A nice list of self-hosted solutions is available on <a href="https://lisakov.com/projects/open-source-comments/">lisakov.com</a>.
But I was scared about the maintenance and hosting cost of such an option.</p>
<p>As a long time user of Matrix, I gave <a href="https://cactus.chat/">Cactus Comments</a> a try, but it was a bit too complex to manage.</p>
<p>I finally ended up discovering a very simple snippet on <a href="https://carlschwan.eu/2020/12/29/adding-comments-to-your-static-blog-with-mastodon/">carlschwan.eu</a>.
This uses the Fediverse to expose comments on a static blog.
So all the credits go to <a href="https://carlschwan.eu/">carlschwan.eu</a>, <a href="https://mastodon.online/@veronica/110028499674748958">@veronica@mastodon.online</a> and <a href="https://mastodon.blaede.family/@cassidy">@cassidy@blaede.family</a>.</p>
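<p>The gist of the technique, as I understand it: each blog post links to the Mastodon status announcing it, and a few lines of JavaScript fetch the replies to that status via the public <code>/api/v1/statuses/:id/context</code> endpoint. Here is a simplified sketch (field mapping of my own, not the exact snippet from carlschwan.eu; the returned HTML should still be sanitized before insertion):</p>

```javascript
// Map one Mastodon status object (a reply) to the fields rendered as a comment.
function toComment(reply) {
  return {
    author: reply.account.display_name || reply.account.username,
    url: reply.url,
    html: reply.content,      // sanitize this before injecting it in the page!
    date: reply.created_at,
  };
}

// Fetch every reply to the announcement status from the public context endpoint.
async function fetchComments(instance, statusId) {
  const res = await fetch(`https://${instance}/api/v1/statuses/${statusId}/context`);
  const { descendants } = await res.json();
  return descendants.map(toComment);
}

// Usage: fetchComments("mastodon.social", statusId).then(renderComments);
```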
<p>This is just perfect:</p>
<ul>
<li>super simple, a few dozen lines of JavaScript.<br>This is also very lightweight for people visiting the blog. Much, much lighter than Disqus!</li>
<li>I used to announce new blog posts on Twitter/Mastodon.<br>The comments sent on this announcement message are now merged with the comments visible on the blog!</li>
<li>No hosting to maintain as it relies on the fediverse server.</li>
<li>No longer bound to a unique service provider. I can move to another fediverse server.</li>
<li>The moderation is part of the fediverse, so I should be able to manage the comments.</li>
</ul>
<p>I can only think of a few downsides so far:</p>
<ul>
<li>It requires people to be on the fediverse to be able to comment.<br>But any service compatible with Mastodon works, and you no longer have to register with Disqus.<br>Also, you have to write your comment on the fediverse web page. It would require more work and maintenance to offer that directly from the blog page.</li>
<li>I have to manually create a new message on my mastodon server for each new blog post.<br>But that's something I was doing anyway to announce new blog posts...</li>
<li>Mastodon doesn't allow migrating existing messages to a new server.<br>So it may be hard to keep the past messages while migrating to something new.<br>But as this is an open service with many open APIs, it should still be easy to export them with custom scripts.</li>
<li>EDIT: Comments from people using a locked account won't appear.</li>
</ul>
<h1><a href="http://techno-barje.fr/post/2019/09/17/trabant-calculator">Trabant Calculator - A data visualization of TreeHerder Jobs durations</a></h1>
<p><time datetime="2019-09-17T17:00:00.000Z">2019-09-17</time></p>
<p><a href="https://ochameau.github.io/trabant-calc/">Link to this tool</a> (its <a href="https://github.com/ochameau/trabant-calc/">sources</a>)</p>
<h2 id="what-is-this-tool-about">What is this tool about?</h2>
<p>Its goal is to give a better sense of how much computation is going on in Mozilla automation.
The current <a href="https://treeherder.mozilla.org/#/jobs?repo=mozilla-central">TreeHerder UI</a> surfaces job durations, but only per job. To get a sense of how much we stress
our automation, we have to click on each individual job and do the sum manually.
This tool does the sum for you.
It also tries to rank the jobs by their durations. I would like to open minds about the possible impact on the environment we may have here.
For that, I translate these durations into something fun that doesn't necessarily make any sense.</p>
<h2 id="what-is-that-cars-gif">What is that car's GIF?</h2>
<p>The car is a <a href="https://en.wikipedia.org/wiki/Trabant">Trabant</a>. This car is often seen as symbolic of the former East Germany and the collapse of the Eastern Bloc in general. This part of the tool is just a joke. You should only consider the durations, which are meant to be trustworthy data. Translating a worker duration into CO2 emissions is almost impossible to get right. And that's what I do here: translate a worker duration into a potential energy consumption, which I translate into a potential CO2 emission, before finally translating that CO2 emission into the equivalent emission of a Trabant over a given distance in kilometers.</p>
<h2 id="power-consumption-of-an-aws-worker-per-hour">Power consumption of an AWS worker per hour</h2>
<p>Here is a rough estimate of Amazon AWS CO2 emissions for a t4.large worker.
The power usage of the machines these workers run on could be 0.6 kW.
Such a worker uses 25% of one of these machines.
Then let's say that Amazon's <a href="https://en.wikipedia.org/wiki/Power_usage_effectiveness">Power Usage Effectiveness</a> is 1.1.
This means that one hour of a worker consumes <strong>0.165 kWh</strong> (0.6 * 0.25 * 1.1).</p>
<h2 id="co2-emission-of-electricity-per-kwh">CO2 emission of electricity per kWh</h2>
<p>Based on US Environmental Protection Agency data (<a href="https://www.epa.gov/sites/production/files/2018-02/documents/egrid2016_summarytables.pdf">source</a>), the average CO2 emission per MWh is 998.4 lb/MWh.
So 998.4 * 453.59237 (g/lb) = 452,866 g/MWh, and 452,866 / 1000 ≈ <strong>452 g of CO2/kWh</strong>.
Unfortunately, the data is already old: it comes from a 2018 report, which seems to cover 2017 data.</p>
<h2 id="co2-emission-of-a-trabant-per-km">CO2 emission of a Trabant per km</h2>
<p>A Trabant emits <strong>170 g of CO2 / km</strong> (<a href="http://www.trabantforum.de/ubb/Forum1/HTML/007230.html">source</a>). Another source reports 140 g, but let's use the higher figure.</p>
<h2 id="final-computation">Final computation</h2>
<pre><code>Trabant kilometers = "Hours of computation" * "Power consumption of a worker per hour"
                   * "CO2 emission of electricity per kWh"
                   / "CO2 emission of a Trabant per km"
Trabant kilometers = "Hours of computation" * 0.165 * 452 / 170
=&gt; Trabant kilometers = "Hours of computation" * 0.4387058823529412
</code></pre>
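<p>The formula above can be sketched in JavaScript. The constants are the article's own estimates and, as the post itself says, are highly debatable:</p>
<pre><code>```javascript
// Worker energy use: machine kW * share of machine * PUE (estimates from this post).
const WORKER_KWH_PER_HOUR = 0.6 * 0.25 * 1.1; // = 0.165 kWh
const GRAMS_CO2_PER_KWH = 452;                // US grid average (EPA, 2017 data)
const GRAMS_CO2_PER_TRABANT_KM = 170;         // forum estimate for a Trabant

function trabantKilometers(hoursOfComputation) {
  const kWh = hoursOfComputation * WORKER_KWH_PER_HOUR;
  const gramsCO2 = kWh * GRAMS_CO2_PER_KWH;
  return gramsCO2 / GRAMS_CO2_PER_TRABANT_KM;
}
```</code></pre>
<p>One hour of computation thus maps to roughly 0.44 Trabant-kilometers.</p>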
<h2 id="all-of-this-must-be-wrong">All of this must be wrong</h2>
<p>Except the durations! Everything else is highly subject to debate. <br/>
Sources are <a href="https://github.com/ochameau/trabant-calc/">here</a>; contributions and feedback are welcome.</p>
Interfaces experiments for and from Firefox2016-06-28T11:30:00.000Zhttp://techno-barje.fr/post/2016/06/28/html-experiments<p>What about easily experimenting with new interfaces for Firefox?
Written with regular web technologies, served over HTTP, and refreshable via a keyboard shortcut.</p>
<h2 id="how-so">How so?</h2>
<p>Follow these 3 steps:</p>
<ul>
<li>Make sure you are running <a href="https://nightly.mozilla.org/">Firefox Nightly</a>,</li>
<li>Install <a href="http://techno-barje.fr/public/browser_ui-0.1.2-fx.xpi">this addon</a>,</li>
<li>Visit this link: <a href="browserui://rawgit.com/ochameau/planula-browser-advanced/addon-demo/">browserui://rawgit.com/ochameau/planula-browser-advanced/addon-demo/</a>.</li>
</ul>
<p>You should see a page asking you to confirm testing this browser experiment.
Once you click on the install button, the current Firefox interface will be replaced on the fly.</p>
<iframe width="420" height="315" src="https://www.youtube.com/embed/JZemGiSl5LA" frameborder="0" allowfullscreen></iframe>
<p>This interface is an old version of <a href="https://github.com/browserhtml/browserhtml/">Browser.html</a>. But instead of requiring a custom runtime, it is just a regular web site, written with web technologies and fetched from GitHub at every startup.
If you want to check that this is a regular web page, just look at the sources:
view-source:<a href="http://rawgit.com/ochameau/planula-browser-advanced/addon-demo/">http://rawgit.com/ochameau/planula-browser-advanced/addon-demo/</a></p>
<p>If needed, you can revert back to the default Firefox UI at any time using the "Ctrl + Alt + R" shortcut.</p>
<p>Want to see more interfaces? Here are some links:</p>
<ul>
<li><a href="browserui://rawgit.com/ochameau/planula-minimal-browser/master/index.html">browserui://rawgit.com/ochameau/planula-minimal-browser/master/index.html</a> Simplest interface ever. Just one HTML file.</li>
<li><a href="browserui://rawgit.com/ochameau/planula-browser-advanced/addon-demo/index.html?tabsui=sidetabs">browserui://rawgit.com/ochameau/planula-browser-advanced/addon-demo/index.html?tabsui=sidetabs</a> Tabs on the side.</li>
<li>Yours?</li>
</ul>
<h2 id="how-does-it-work-">How does it work?</h2>
<p>The addon itself is simple. It does 4 things:</p>
<ul>
<li>It installs a custom protocol handler for browserui:// in order to redirect to the install page,</li>
<li>The install page then communicates with a privileged script to set the "browser.chromeURL" preference, which indicates the URL of the top-level document,</li>
<li>While setting this preference, it also grants the target URL additional permissions to use the "mozbrowser" attribute on iframes,</li>
<li>Finally, it reloads the top-level document with the target URL.</li>
</ul>
<p>The &lt;iframe mozbrowser&gt; tag, while being non-standard, allows an iframe to act similarly to a &lt;xul:browser&gt; or a &lt;webview&gt; tag. It allows the interface to safely open websites. Web pages loaded inside it also run in a separate content process (e10s), unlike those in a regular &lt;iframe&gt; tag.</p>
<h2 id="why">Why?</h2>
<p>Last year, during the Whistler All Hands, there was this "Kill XUL" meeting.
<a href="https://public.etherpad-mozilla.org/p/kill-xul-planning">Various options</a> were discussed, but it is unclear whether any of them was really looked into,
except maybe the Electron option, via the Tofino project.</p>
<p>Then a thread was posted on <a href="https://mail.mozilla.org/pipermail/firefox-dev/2015-July/003063.html">firefox-dev</a>. At least Go Faster and the new Test Pilot addons
started using HTML for new self-contained features of Firefox, which is already a great step forward!</p>
<p>But there was no experiment measuring how we could leverage HTML to build browsers within Mozilla.</p>
<p><a href="https://github.com/vingtetun/">Vivien</a> and I started looking into this and ended up releasing this addon.
But we also have a more concrete plan for slowly migrating Firefox from XUL and cryptic XPCOM/jsm/chrome technologies
to a mix of Web standards and Web extensions. We have a way to make Web extensions work within these new HTML interfaces.
It already supports basic features: when you open the browserui:// links, it actually opens an HTML page from a Web extension.</p>
<h2 id="how-to-hack-your-own-thing">How to hack your own thing?</h2>
<p>First, you need to host an HTML page somewhere.
Any website can be loaded: <a href="browserui://localhost/">browserui://localhost/</a> if you are hosting files locally.
But you may also just load Google if you want to: <a href="browserui://google.com/">browserui://google.com/</a>.
Just remember the "Ctrl + Alt + R" shortcut to get back to the default Firefox UI!</p>
<p>The easiest way is probably to fork <a href="https://github.com/ochameau/planula-minimal-browser/">this one-file minimal browser</a>, or directly the <a href="https://github.com/ochameau/planula-browser-advanced/tree/addon-demo/">demo browser</a>.
Host it somewhere and open the matching browserui:// URL.
browserui:// just maps one-to-one to the same URL starting with "http" instead of "browserui".
Given that this addon is just a showcase, we don't support https yet.</p>
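<p>The mapping is trivial; here is a sketch of what the addon does with the URL (an illustration only, not the addon's actual code):</p>
<pre><code>```javascript
// Hypothetical sketch: browserui:// URLs map one-to-one to http:// URLs.
// (The showcase addon does not support https yet.)
function browseruiToHttp(url) {
  if (!url.startsWith("browserui://")) {
    throw new Error("Not a browserui:// URL: " + url);
  }
  return "http://" + url.slice("browserui://".length);
}
```</code></pre>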
<p>Then, change the files, hit "Ctrl + R", and your browser UI will be reloaded, fetching resources again over HTTP.</p>
<p>Once you have something you want to share, GitHub is handy.
If you push the files to, let's say, the "mozilla" account with "myui" as the repository name,
then you can share it simply via the following link:</p>
<p> browserui://rawgit.com/mozilla/myui/master/</p>
<p>But there are many ways to control which particular version you share.
Sharing another branch, like the "demo" branch:</p>
<p> browserui://rawgit.com/mozilla/myui/demo/</p>
<p>Or a particular changeset:</p>
<p> browserui://rawgit.com/mozilla/myui/5a931e3e0046ccde6d4ad3a73e93016bcc3a9650/</p>
<h2 id="contribute">Contribute</h2>
<p>This addon lives on github, over here: <a href="https://github.com/ochameau/browserui">https://github.com/ochameau/browserui</a>.
Feel free to submit Issues or Pull requests!</p>
<h2 id="whats-next">What's next?</h2>
<ul>
<li>Demonstrate a WebExtension-based browser and the ability to implement Web Extension APIs from an HTML document.</li>
<li>Tweak the platform to better handle OS integration from a low-privileged HTML document.
Things like popups/panels, transparent windows, native OS controls, menus, ...</li>
<li>Also tune the platform to be able to load existing browser features from HTML, like about: pages, view-source:, devtools, ...</li>
</ul>
<p>Actually, we already have various patches to do that and would like to upstream them to Firefox!</p>
Shipping Firefox features as Web Extensions2016-03-16T10:40:00.000Zhttp://techno-barje.fr/post/2016/03/16/shipping-firefox-features-as-addon<p>What about using Web Extension APIs to implement core Firefox features?
That is the opportunity I would like to discuss today.</p>
<p>Not only new features (Hello, Pocket) but also existing built-in features (e.g. Session Restore). I <a href="http://blog.techno-barje.fr/post/2016/03/14/session-restore-web-extension/">recently blogged</a> about building it as a web extension.</p>
<p>Session restore is a critical feature of Firefox.
It uses many Mozilla-only technologies: XUL, XPCOM, message managers, jsm and so on.
It also involves mostly privileged code where privileges aren't really needed, possibly leading to security issues.
Even if it lives in its own folder, <em>/browser/components/sessionstore/</em>, many parts of it are hardcoded elsewhere.
It is clearly not self-contained.</p>
<p>Instead of hardcoding this feature into Firefox, we could ship it as an addon.
That would have various benefits:</p>
<ul>
<li>Give us a chance to release this part of Firefox faster than the platform,</li>
<li>Help us experiment by doing A/B testing with two very different implementations,</li>
<li>Dogfooding Web Extension APIs would make them more stable and ensure they are both useful and powerful,</li>
<li>It should open ways to reuse these addons once Servo is ready and implements Web Extension APIs,</li>
<li>Last but not least, it dramatically reduces the contribution effort required to modify a core Firefox feature:<ul>
<li>Forget about building C++ and having a build environment,</li>
<li>You can check out a small repo instead of all of mozilla-central,</li>
<li>You do not necessarily have to use various Mozilla-specific tools like mach,</li>
<li>No need to even build Firefox itself; instead you can fetch a nightly build and install the addon on it,</li>
<li>And forget about all the cryptic technologies that we keep using like ancient relics: XUL, XPCOM and so on!</li>
</ul>
</li>
</ul>
<p>About contribution: I asked how many people contribute(d) to Session Restore.
There is mostly one active employee working on it: mconley.
Sparse contributions are made by other employees like ttaubert, yoric, dragana, mystor, mayhemer,...
But there seems to be only one non-employee contribution, made by Allasso Travesser, with just one patch.</p>
<p>I'm convinced we can engage more people with simpler workflows (addon versus built-in) and technologies with a lower learning curve (Web Extensions vs XUL).</p>
Session restore as a Web Extension2016-03-14T07:00:00.000Zhttp://techno-barje.fr/post/2016/03/14/session-restore-web-extension<p>Session Restore is a built-in Firefox feature which preserves user data after a crash or an unexpected close.
I spent a little time exploring whether it is possible to build such a feature as a replaceable web extension.</p>
<p>Here is a sketch of session store implemented as a web extension:
<iframe width="420" height="315" src="https://www.youtube.com/embed/58vPBJWmAig" frameborder="0" allowfullscreen></iframe></p>
<p>This addon currently saves and restores:</p>
<ul>
<li>tabs (the url for each tab and the tab order)</li>
<li>form values</li>
<li>scroll positions</li>
</ul>
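<p>As a rough illustration, the saved tab state might be serialized like this — the tab objects and field names here are hypothetical simplifications, not the addon's actual format:</p>
<pre><code>```javascript
// Hypothetical sketch: turn a list of tab objects (as a tabs API query might
// return them) into a minimal session record, preserving tab order.
function serializeSession(tabs) {
  return {
    savedAt: Date.now(),
    tabs: tabs
      .slice()
      .sort((a, b) => a.index - b.index) // keep the tab order
      .map(tab => ({ url: tab.url })),
  };
}
```</code></pre>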
<p>Missing features (compared to the built-in session restore):</p>
<ul>
<li>Does not restore session storage</li>
<li>Always restores the previous session</li>
<li>I have no idea what it does regarding private browsing</li>
<li>No dedicated about:sessionrestore page</li>
<li>Does not save tab history; instead it just saves the current tab's document/form/scroll state</li>
</ul>
<p>Getting the above points working is a matter of time and possibly some tweaks to the current Web Extension APIs.</p>
<p>Yes, it is possible to implement a core Firefox feature with the in-progress implementation of Web Extension APIs.
It also shows the limitations of the current Chrome APIs: for example, in order to fully support tab history, the APIs may need to be extended.</p>
<p>Source code is available on <a href="https://github.com/ochameau/session-restore-webext/">github</a>.
A <a href="https://github.com/ochameau/session-restore-webext/releases/download/v0.1-beta/sessionstore.mozilla.org-v0.1-beta.xpi">pre-release version</a> is also available. Don't forget to toggle the <code>xpinstall.signatures.required</code> preference to false in about:config to be able to install it.</p>
Debug pure javascript leaks2013-03-09T08:00:00.000Zhttp://techno-barje.fr/post/2013/03/09/js-memory-debugging<p>The Mozilla ecosystem already has plenty of <a href="https://wiki.mozilla.org/Performance:Leak_Tools">built-in features, scripts and addons</a> to debug memory usage. But most of them are focused on the internals of the C++ codebase. These tools are very verbose and expose very little Javascript metadata, so you have to start by learning tons of internal C++ classes before being able to understand that your Javascript objects are actually visible in these tools' output!</p>
<p>Until now, when chasing Addon SDK memory leaks, I just looked at overall memory usage and read and re-read our codebase until I finally found the leak by seeing it in the code... But that practice may come to an end!
We should have a Javascript-oriented memory debugging tool, with a clear picture of which objects are still allocated at a given point in time. <strong>Without any C++ aspect.</strong> With an output that any confirmed Javascript developer can easily read and understand without knowing much about how the Mozilla engine works.</p>
<p>With that in mind, I started looking at <strong>the CC/GC object graph</strong>. This graph contains a view of all objects allocated dynamically by the garbage collector. All Javascript objects end up in this graph, but so do many more C++ objects that we have to translate into a meaningful Javascript paradigm for the developer.</p>
<p>Then I realized that an XPCOM component already exposes the whole CC graph: <a href="https://developer.mozilla.org/en-US/docs/XPCOM_Interface_Reference/nsICycleCollectorListener">nsICycleCollectorListener</a>. But again, with very little Javascript information other than "this is a Javascript object" or "this is a Javascript function". Not much more. It ends up being quite frustrating, as most of the information is there; we just miss a few pinches of Javascript metadata,
like:</p>
<ul>
<li>what are the attributes of this object?</li>
<li>in which document does it live?</li>
<li>in which script was it allocated?</li>
<li>in which line?</li>
<li>in which function?</li>
<li>which other Javascript objects refer to this one?</li>
<li>what is the function's name/source?</li>
<li>...</li>
</ul>
<p>Finally, because of -or- thanks to the extra motivation given by <a href="http://paul.cx/">padenot</a> and <a href="https://github.com/vingtetun">vingtetun</a>, I ended up doing crazy hacks to fetch this information directly from Javascript: calling the <a href="https://developer.mozilla.org/en-US/docs/SpiderMonkey/JSAPI_Reference">jsapi</a> library via js-ctypes, with the object addresses given by the nsICycleCollectorListener interface. The benefit is that this experiment can run on any Firefox release build (i.e. no need for a custom Firefox build). Using only JS also allows experimenting faster by avoiding the compilation phase. But this should definitely remain an experiment, as I would not consider it a safe practice!</p>
<p>The result of this is:</p>
<ul>
<li>a <a href="https://github.com/ochameau/jscpptypes">jsctypes C++ mangling library</a>,</li>
<li>a <a href="/public/demo/cc-js-tool/cc-js-tool.xpi">work-in-progress addon</a> to identify DOM node leaks involving multiple compartments/documents, and,</li>
<li>two (<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=833783">1</a>, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=839280">2</a>) bug fixes.</li>
</ul>
<p>You can install this <a href="/public/demo/cc-js-tool/cc-js-tool.xpi">addon</a>; it should work on Windows and Linux with FF20+. You can easily see bug 839280's leaks on today's Aurora (FF21): open Firefox with this addon, open and close the devtools inspector panel (CTRL+SHIFT+I) and finally run the memory script by pressing the ALT+SHIFT+D shortcut.
Wait a bit: the addon is processing the whole CC graph and will freeze your Firefox instance. It will then open a folder with a log file that displays various information about potential cross-compartment leaks.</p>
<p>Let me show you the addon's output for this leak.
The code involved is the following button's click listener:
<a href="http://hg.mozilla.org/mozilla-central/annotate/5d7a14c71f51/browser/devtools/shared/DeveloperToolbar.jsm#l102">/browser/devtools/shared/DeveloperToolbar.jsm</a></p>
<pre><code>button.addEventListener("click", function() {
requisition.update(buttonSpec.typed);
//if (requisition.getStatus() == Status.VALID) {
requisition.exec();
/*
}
else {
console.error('incomplete commands not yet supported');
}
*/
}, false);
</code></pre>
<p>The script will print this in the log file:</p>
<pre><code>############################################################################
DOM Listener leak.
>>> Leaked listener ctypes.uint64_t.ptr(ctypes.UInt64("0x128a16c0")) - JS Object (Function)
Function source:
function () {
"use strict";
requisition.update(buttonSpec.typed);
//if (requisition.getStatus() == Status.VALID) {
requisition.exec();
/*
}
else {
console.error('incomplete commands not yet supported');
}
*/
}
>>> DOM Event target holding the listener ctypes.uint64_t.ptr(ctypes.UInt64("0x12a95f60"))
FragmentOrElement (XUL) toolbarbutton id='command-button-responsive' class='command-button' chrome://browser/content/devtools/framework/toolbox.xul
############################################################################
Scope variable leak.
>>> Function keeping 'button' scope variable alive ctypes.uint64_t.ptr(ctypes.UInt64("0xf9a1640")) - JS Object (Function)
Function source:
function () {
"use strict";
requisition.update(buttonSpec.typed);
//if (requisition.getStatus() == Status.VALID) {
requisition.exec();
/*
}
else {
console.error('incomplete commands not yet supported');
}
*/
}
</code></pre>
<p>It immediately tells you that you <strong>may</strong> leak something via this anonymous function. <strong>May leak</strong>, not <strong>do leak</strong>, as it is always hard to tell which references are expected to be removed; but at least it tells you that this reference still exists and may keep your compartment/document/global alive.</p>
<p>To make it short, the script first searches the CC graph for FragmentOrElement objects and collects all objects from the same compartment. As I focused my work on cross-compartment leaks, I then looked for edges going from and to these objects. Finally, I analysed each object having references from and to other compartments and tried to translate the C++ object patterns into a meaningful sentence in the Javascript paradigm.</p>
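<p>The cross-compartment part of that walk can be sketched as follows — the node and edge shapes here are hypothetical simplifications; the real CC graph is far richer:</p>
<pre><code>```javascript
// Hypothetical sketch: given nodes tagged with a compartment id and directed
// edges between them, list the edges that cross compartment boundaries --
// the candidates for cross-compartment leaks.
function findCrossCompartmentEdges(nodes, edges) {
  const compartmentOf = new Map(nodes.map(n => [n.id, n.compartment]));
  return edges.filter(
    e => compartmentOf.get(e.from) !== compartmentOf.get(e.to)
  );
}
```</code></pre>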
<p>Now what?</p>
<p>I'd like to get feedback from people used to debugging leaks (no matter the language), and also to discuss with people familiar with Gecko internals (nsXPCWrappedJS, JS Object (Call), ...) in order to know whether the assumptions I made <a href="https://github.com/ochameau/cc-js-tool/blob/master/main.js#L213-L250">here</a> are correct, so that I can continue translating new C++ object patterns into meaningful Javascript use cases.</p>
How to write a new WebAPI in Firefox Desktop, mobile, OS - part 1 ?2013-02-14T08:00:00.000Zhttp://techno-barje.fr/post/2013/02/14/how-to-write-a-webapi<p>Mozilla teams recently wrote <a href="https://wiki.mozilla.org/WebAPI">tons of new APIs</a>
in a very short period of time, mostly for Firefox OS, but not only.
As Firefox Desktop, Firefox Mobile and Firefox OS are based on the same source
code, some of these APIs can easily be enabled on Desktop and Mobile.</p>
<p>Writing a new API can be seen as both complicated and simple. Depending on the
one you want to write, you don't necessarily need to write anything other than
Javascript code (for example the <a href="https://wiki.mozilla.org/WebAPI/SettingsAPI">settings API</a>).
That makes the task much more accessible and easier to prototype, as you do
not enter compile/run development cycles, nor have to build Firefox before even trying to experiment. But there is a significant amount of
Mozilla-specific knowledge to acquire before being able to write your API code.</p>
<p>The aim of this article is to write down a simple API example from the ground up
and try to explain everything you need to know to write an API
with the same level of expertise as the Firefox OS engineers.</p>
<h1 id="the-example-api--commonjs-require">The example API: « CommonJS require »</h1>
<p>Let's say we would like to expose to websites a <code>require()</code> method that acts like
the nodejs/commonjs method of the same name. This function allows you to load
javascript files exposing a precise interface, without polluting your current javascript scope.</p>
<p>So given the following javascript file
(<a href="http://blog.techno-barje.fr/public/webapi/module.js">http://blog.techno-barje.fr/public/webapi/module.js</a>):</p>
<pre><code>// All properties set on the `exports` variable will be returned to the requirer
exports.hello = function () {
  return "World";
};
</code></pre>
<p>Any webpage will then be able to use its <code>hello</code> function like this:</p>
<pre><code>var module = navigator.webapi.require("http://blog.techno-barje.fr/public/webapi/module.js");
alert(module.hello()); // Displays "World"
</code></pre>
<h1 id="simpliest-implementation-possible">Simplest implementation possible</h1>
<p>In this first example I stripped various advanced features in order to make it
easier to jump into Firefox internal code. I bundled this example as a Firefox addon
so that you can easily see it running and also hack on it.
You can download it <a href="/public/webapi/api-without-idl.xpi">here</a>. Once it is installed, you will have to relaunch Firefox,
open any webpage, open a Web console and finally execute the
<code>navigator.webapi.require</code> code I just gave.</p>
<p>Now let's see what's inside.
This .xpi file is just a zip file, so you can
open it and see three files:</p>
<ul>
<li><strong>install.rdf</strong>:</li>
</ul>
<p> A really boring file describing our addon. The only two important
fields in this file are <code>&lt;em:bootstrap&gt;false&lt;/em:bootstrap&gt;</code> and
<code>&lt;em:unpack&gt;true&lt;/em:unpack&gt;</code>, required when you need to register an XPCOM file.
More info <a href="https://developer.mozilla.org/en-US/docs/Install_Manifests#bootstrap">here</a>.</p>
<ul>
<li><strong>chrome.manifest</strong>:</li>
</ul>
<pre><code># These two lines register the Javascript XPCOM component defined in
# `web-api.js`
component {20bf1550-64b8-11e2-bcfd-0800200c9a77} web-api.js
contract @mozilla.org/webapi-example;1 {20bf1550-64b8-11e2-bcfd-0800200c9a77}
# This line registers the XPCOM component in the "JavaScript-navigator-property"
# category, which adds it to the list of components that inject a new property
# into web pages' `navigator` global object. The second argument defines the
# name of the property we would like to set.
category JavaScript-navigator-property webapi @mozilla.org/webapi-example;1
</code></pre>
<ul>
<li><strong>web-api.js</strong>:</li>
</ul>
<p>And last but not least, the Javascript XPCOM file. XPCOM is a component object
model heavily used in the Mozilla codebase
(more info <a href="https://developer.mozilla.org/en/docs/XPCOM">here</a>).
Let's analyse its content piece by piece:</p>
<pre><code>function WebAPI() {}

WebAPI.prototype = {
  // Define the XPCOM component id, which has to match the one given in
  // the chrome.manifest file
  classID: Components.ID("{20bf1550-64b8-11e2-bcfd-0800200c9a77}"),

  // Mandatory XPCOM method that defines which interfaces an object exposes.
  // * nsIDOMGlobalPropertyInitializer:
  //   https://developer.mozilla.org/fr/docs/XPCOM_Interface_Reference/nsIDOMGlobalPropertyInitializer
  //   This interface is related to the "JavaScript-navigator-property" XPCOM
  //   category declared in the chrome.manifest file, and defines the `init`
  //   method that is called when a webpage tries to access the
  //   navigator.webapi property.
  QueryInterface: XPCOMUtils.generateQI([
    Ci.nsIDOMGlobalPropertyInitializer
  ]),

  // nsIDOMGlobalPropertyInitializer:init
  init: function init(win) {
    // The `init` method can return an object that will be the one exposed as
    // `navigator.webapi`. This object will be created for each web document.
    return {
      require: function (url) {
        return require(win, url);
      },
      // Special internal attribute used to define which properties the website
      // will be able to access. Only attributes whose names are specified here
      // are going to be accessible by the page.
      // https://wiki.mozilla.org/XPConnect_Chrome_Object_Wrappers
      __exposedProps__: {
        require: 'r'
      }
    };
  }
};

// Last XPCOM thingy that exposes our WebAPI component to the system.
// More info here: https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/XPCOMUtils.jsm
const NSGetFactory = XPCOMUtils.generateNSGetFactory([WebAPI]);
</code></pre>
<p>I'll let you discover the implementation of the <code>require</code> method in the
addon's sources — or better, make it your job to implement it. You now have the
very minimal setup where you can tweak the value returned by the <code>init</code>
method and expose your own API to webpages.</p>
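<p>As a hint, the core of a CommonJS-style <code>require</code> can be sketched like this. This is only an illustration of the module-evaluation part: the real addon must also fetch the script over HTTP and evaluate it safely with respect to the page's principal.</p>
<pre><code>```javascript
// Illustration only: evaluate already-fetched module source in a scope that
// exposes an `exports` object, CommonJS-style.
function evaluateModule(source) {
  const exports = {};
  // Wrap the source in a function so its variables don't pollute our scope.
  new Function("exports", source)(exports);
  return exports;
}
```</code></pre>
<p>For example, evaluating the module.js source from the beginning of this post would yield an object whose <code>hello()</code> returns "World".</p>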
<p>Now note that this is a very minimal example. I'll try to continue blogging
about that and eventually talk about:</p>
<ul>
<li>interfaces definition,</li>
<li>custom event implementation,</li>
<li>other XPCOM categories (in order to inject into objects other than navigator),</li>
<li>how to implement a cross process API (mandatory for Firefox OS),</li>
<li>prototyping via the Addon SDK,</li>
<li>...</li>
</ul>
Firefox OS Bootstrap: How to Build It on a VM2012-10-27T07:00:00.000Zhttp://techno-barje.fr/post/2012/10/27/firefox-os-bootstrap<p>During my on-boarding on the Firefox OS team I kept a draft of everything that needs to be done in order to build the project and flash it to the phone.
I'm pretty sure a single page allowing anyone to start working on Firefox OS can help people on-boarding onto the project. I highly suggest taking a look at the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS">MDN Firefox OS documentation</a> if you visit this page later on, as this blogpost will most likely be outdated within a few weeks.</p>
<h1 id="environnement">Environment</h1>
<h2 id="use-a-virtual-machine">Use a Virtual Machine</h2>
<p>I suggest everyone use a VM. It lets you use exactly the same environment, maximizing your chances of successfully building Firefox OS!
Using another OS, another Linux distro or even another Ubuntu version will introduce differences in dependency versions and can easily give you errors no one but you is facing :(</p>
<p>You can use VMware Player, which is free and available <a href="https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/4_0">here</a>, or any other VM software you are comfortable with that has decent USB support (required to flash the phone).</p>
<h2 id="use-ubuntu-1110">Use Ubuntu 11.10</h2>
<p>For the same reason as the VM, I suggest you use the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Boot_to_Gecko/B2G_build_prerequisites#Requirements_for_Linux">recommended Linux distro and version</a>.
You can download this <a href="http://releases.ubuntu.com/oneiric/ubuntu-11.10-desktop-amd64.iso">Ubuntu 11.10 x64 ISO image</a> and create a VM out of it (it is super easy with VMware; it does almost everything for you). The only important things are to set a large enough virtual drive, 30GB being a safe minimum, and enough memory, 4GB being a safe minimum.</p>
<p>Now open a terminal and launch all following commands in order to install all necessary dependencies.</p>
<h1 id="install-dependencies">Install dependencies</h1>
<h2 id="install-build-dependencies">Install build dependencies:</h2>
<pre><code>sudo apt-get install build-essential bison flex lib32ncurses5-dev lib32z1-dev lib32z1-dev ia32-libs libx11-dev libgl1-mesa-dev gawk make curl bzip2 g++-multilib libc6-dev-i386 autoconf2.13 ccache git
sudo apt-get build-dep firefox
</code></pre>
<h2 id="java-jdk-6-needed-for-adb">Java JDK 6 needed for adb</h2>
<pre><code># The following PPA allows you to easily install the JDK through apt-get
sudo add-apt-repository ppa:ferramroberto/java
sudo apt-get update
sudo apt-get install sun-java6-jdk
</code></pre>
<h2 id="android-sdk-in-order-install-adb">Android SDK in order to install adb</h2>
<pre><code># You first need to install 32-bit libs as we are using a 64-bit OS;
# otherwise, you will get the following error while running adb:
# $ adb: No such file or directory
sudo apt-get install ia32-libs
# There is no particular reason to use this SDK version;
# it was just the current version when I installed it
wget http://dl.google.com/android/android-sdk_r20.0.3-linux.tgz
tar zxvf android-sdk_r20.0.3-linux.tgz
cd android-sdk-linux/
# The following command installs only the "platform-tools" package, which
# contains adb and fastboot
./tools/android update sdk --no-ui --filter 1,platform-tool
# Register adb in your PATH
echo "PATH=`pwd`/platform-tools:\$PATH" >> ~/.bashrc
# Start a new bash instance in order to pick up the new PATH
bash
</code></pre>
<h2 id="tweak-udev-in-order-to-recognize-your-phone">Tweak udev in order to recognize your phone</h2>
<p>If you do not do this at all, or not properly, <code>$ adb devices</code> will print this:</p>
<pre><code>???????????? no permissions
</code></pre>
<p>You need to put the following content into <code>/etc/udev/rules.d/51-android.rules</code>:</p>
<pre><code>cat <<EOF | sudo tee -a /etc/udev/rules.d/51-android.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="19d2", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="18d1", MODE="0666"
EOF
sudo restart udev
</code></pre>
<p>Here I register only the IDs of the internal Mozilla phones, otoro and unagi.
You may want to add lines for other phones; see <a href="http://developer.android.com/tools/device.html#VendorIds">this webpage</a> for other vendor IDs.</p>
<h1 id="checkout-all-necessary-projects">Checkout all necessary projects</h1>
<h2 id="checkout-b2g-repository">Checkout B2G repository</h2>
<pre><code>git clone https://github.com/mozilla-b2g/B2G.git
</code></pre>
<p>Take a minute to configure git; otherwise the next steps will keep asking you for your name and email.</p>
<pre><code>cat > ~/.gitconfig <<EOF
[user]
name = My name
email = me@mail.com
[color]
ui = auto
EOF
</code></pre>
<h2 id="connect-your-phone-and-ensure-it-is-visible-from-your-vm">Connect your phone and ensure it is visible from your VM.</h2>
<p>In order to do so, run <code>adb devices</code>; you should see a non-empty list of devices.</p>
<pre><code>$ adb devices
List of devices attached
full_unagi device
</code></pre>
<p>If you see a <code>no permissions</code> message, check the udev step above.<br />
Note that you have to set up your virtual machine software to connect the USB port to the VM. In VMware Player, click: <code>Player menu > Removable devices > "...something..." Android > Connect (Disconnect from host)</code>.<br /></p>
<h2 id="checkout-all-dependencies-necessary-for-your-particular-phone">Checkout all dependencies necessary for your particular phone</h2>
<p>Before running the following command, ensure that your phone is connected.
Note that you have to run this command with <strong>your phone still running Android, ICS version</strong>. If your phone is already on B2G, you will have to retrieve the backup-otoro or backup-unagi folder automatically created when this command first ran.<br/>
If your device is on an Android version older than ICS, you will have to flash it to ICS first. For both of these issues, ask in #b2g for help.
This step will take a while, as it downloads tons of big projects: android, gonk, kernel, mozilla-central, gaia, ... More than 4GB of git repositories, so be patient.</p>
<pre><code>cd B2G/
# Run ./config.sh --help for the list of supported phones.
./config.sh unagi
</code></pre>
<h2 id="install-qualcomm-areno-graphic-driver">Install Qualcomm Adreno graphic driver</h2>
<p>Only if you are aiming to build Firefox OS for the otoro or unagi phones,
you will have to manually download the Qualcomm Adreno ARMv7 graphic driver, available <a href="https://developer.qualcomm.com/file/10127">here</a>. <br/>
Unfortunately, you will have to register on this website in order to be able to download this file. Once downloaded, put <code>Adreno200-AU_LINUX_ANDROID_ICS_CHOCO_CS.04.00.03.06.001.zip</code> into your <code>B2G</code> directory.</p>
<h1 id="build-firefox-os">Build Firefox OS</h1>
<p>If ./config.sh went fine, you can now build Firefox OS!</p>
<pre><code>./build.sh
</code></pre>
<p>Here are some errors you might see:</p>
<ul>
<li><p><code>arm-linux-androideabi-g++: Internal error: Killed (program cc1plus)</code></p>
<p>You are most likely running out of memory. 4GB is a safe minimum.</p>
</li>
<li><p><code>KeyedVector.h:193:31: error: indexOfKey was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]</code></p>
<p>Your gcc version is too recent. Try using gcc 4.6.x.</p>
</li>
</ul>
<h1 id="flash-the-phone">Flash the phone</h1>
<p>If ./build.sh went fine, you can now flash your phone:</p>
<pre><code> ./flash.sh
</code></pre>
<p>Note that I had to unplug and replug the device in order to make it work in the VM.
When running ./flash.sh, the unagi phone switches to a blue screen, then the ./flash.sh script gets stuck on a <code>< waiting device ></code> message. If I unplug the phone and plug it back in, it immediately starts flashing. Be careful if you have to do the same: ensure that ./flash.sh doesn't start flashing right when you unplug it! <br/><br/>
If ./flash.sh fails saying that the image is too large, it might mean that you have to root your phone first. Again, ask in #b2g for help.</p>
Addon SDK 1.11 - the page-mod release2012-09-19T07:00:00.000Zhttp://techno-barje.fr/post/2012/09/19/1.11-page-mod-release<p>The <a href="https://addons.mozilla.org/en-US/developers/docs/sdk/latest/packages/addon-kit/page-mod.html">page-mod API</a> is the most commonly used API in Jetpack. It allows you to execute a piece of JavaScript code against any given website. It is very similar to Greasemonkey and userscripts.</p>
<p>In Addon SDK version 1.11, due October 30th, we will bring various subtle but very important fixes, features, and improvements to this API. In the meantime, we will start releasing beta versions on Tuesday (09/25), with 1.11b1.</p>
<p>Here is an overview of these changes:</p>
<ol>
<li><p>You will now be able to attach page-mod scripts to already opened tabs, by using the new <code>attachTo</code> option.
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=708190">bug 708190</a>]</p>
</li>
<li><p>With the same <code>attachTo</code> option, you can restrict page-mod scripts to top-level tab documents, and so avoid having them applied to iframes.
The <a href="https://blog.mozilla.org/addons/2012/09/12/introducing-page-mods-attachto/">following blogpost</a> goes into detail about this new option.
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=684047">bug 684047</a>]</p>
</li>
<li><p>page-mod now ignores non-tab documents like panels, widgets, sidebars, hidden documents living in Firefox's hidden window, ...
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=777632">bug 777632</a>]</p>
</li>
<li><p>Your addon will be more efficient, as we removed a costly workaround: the JavaScript proxies layer between your content script and the page. We now rely directly on C++ wrappers, also known as Xray wrappers. We are expecting a major improvement in terms of memory and CPU usage. As this change depends on modifications made in Firefox, it will only be enabled on Firefox 17 and greater.
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=786976">bug 786976</a>]</p>
</li>
<li><p>Content scripts are now correctly frozen when you go back and forth in tab history. Before this, your content script was still alive and could throw unexpected exceptions or modify an unexpected document.
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=766088">bug 766088</a>]</p>
</li>
<li><p>Random fixes: window.top and window.parent will be correct for iframes [<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=784431">bug 784431</a>].</p>
</li>
<li><p>Last but not least, and still at risk for the 1.11 release: you will be able to extend the privileges of your content script to extra domains, so that your script will be able to execute actions on your own domain, in addition to the current page's domain, without facing cross-domain limitations. This relies on improvements being made to Firefox and will only be enabled on Firefox 17+.
[<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=786681">bug 786681</a>]</p>
</li>
</ol>
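<p>The <code>attachTo</code> values mentioned in the first two items combine into one small decision: should a given page-mod attach to a given document? Here is a hypothetical plain-JavaScript sketch of that decision. The option values <code>"existing"</code>, <code>"top"</code> and <code>"frame"</code> come from the blog post linked above; the helper itself is illustrative, not the SDK's actual code:</p>
<pre><code class="language-javascript">// Illustrative sketch (not SDK code) of how attachTo can be interpreted:
// "existing" allows already opened tabs, "top" allows top-level documents,
// "frame" allows iframes.
function shouldAttach(attachTo, doc) {
  // doc is a plain object: { topLevel: bool, alreadyOpened: bool }
  if (doc.alreadyOpened && attachTo.indexOf("existing") === -1)
    return false;
  return doc.topLevel ? attachTo.indexOf("top") !== -1
                      : attachTo.indexOf("frame") !== -1;
}

// attachTo: ["existing", "top"] attaches to an already opened top-level tab,
// but not to an iframe inside it:
console.log(shouldAttach(["existing", "top"], { topLevel: true,  alreadyOpened: true })); // true
console.log(shouldAttach(["existing", "top"], { topLevel: false, alreadyOpened: true })); // false
console.log(shouldAttach(["top"],             { topLevel: true,  alreadyOpened: true })); // false
</code></pre>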
<p>It is really exciting to see our most used API receiving so many improvements, and I hope that we fixed most of the long-standing issues you may have faced with page-mod!</p>
<p>We would really like to get your feedback on these changes. If you find anything wrong, please file bugs <a href="https://bugzilla.mozilla.org/enter_bug.cgi?product=Add-on%20SDK">here</a>, and do not hesitate to come discuss with our team on the <a href="https://groups.google.com/forum/?fromgroups#!forum/mozilla-labs-jetpack">mailing-list</a>.</p>
Jetpack localization using YAML format2011-11-17T08:00:00.000Zhttp://techno-barje.fr/post/2011/11/17/jetpack-localization-yaml<p>In <a href="/post/2011/10/31/jetpack-localization/">a previous post</a>, I described my first proposal for localization support in Jetpack addons. I have decided to switch the locale file format from JSON to <a href="http://en.wikipedia.org/wiki/YAML">YAML</a>. During the MozCamp event, folks helped me identify some pitfalls with JSON:</p>
<ul>
<li><strong>No multiline string support.</strong> Firefox's parser allows multiline strings, but this is not officially supported, so third-party tools would not handle them properly.</li>
<li><strong>No easy way to add comments.</strong> It is mandatory for localizers to have context descriptions in comments next to the keys to translate. As there is no way to add comments in JSON, supporting them would end up greatly complicating the locale format.</li>
</ul>
<h1 id="example">Example</h1>
<pre><code class="language-yaml">
# You can add comments with `#`...
Hello %s: Bonjour %s # almost ...
hello_key: Bonjour %s # wherever you want!
# For multiline, you need to indent your string with spaces
multiline:
  "Bonjour
  %s"
# Plural forms:
# we use a nested object with attributes that depend on the target language.
# In English, we only have 'one' (for 1) and 'other' (for everything but 1);
# in French, it is the same except that 'one' matches 0 and 1.
# Some languages have more forms: 4 in Polish and 6 in Arabic.
#
# Having a structured format like YAML
# helps us write these translations!
pluralString:
  one: "%s telechargement"
  other: "%s telechargements"
# I need to enclose these strings with `"` because of %. See note below.
</code></pre>
<pre><code class="language-javascript">
// Get a reference to `_` gettext method with:
const _ = require("l10n").get;
// These three forms all end up returning the same string.
// We can still use a locale string in code, or use a key.
// The multiline string gets its newlines removed. (There is a way to keep them.)
_("Hello %s", "alex") == _("hello_key", "alex") == _("multiline", "alex")
// Example of non-naive l10n feature, plurals:
_("pluralString", 0) == "0 telechargement"
_("pluralString", 1) == "1 telechargement"
_("pluralString", 10) == "10 telechargements"
</code></pre>
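<p>To make the plural-form comments above concrete, here is a minimal, hypothetical sketch in plain JavaScript of how a gettext-style <code>_</code> could pick a plural form from such a nested entry once the YAML is parsed (the real SDK implementation may differ):</p>
<pre><code class="language-javascript">// Hypothetical sketch: a gettext-style helper picking plural forms.
// The locale table mirrors the YAML example above, already parsed.
var locale = {
  "Hello %s": "Bonjour %s",
  "pluralString": { one: "%s telechargement", other: "%s telechargements" }
};

// French plural rule: 0 and 1 use the 'one' form, everything else 'other'.
function pluralForm(n) {
  return n <= 1 ? "one" : "other";
}

function _(key, arg) {
  var entry = locale.hasOwnProperty(key) ? locale[key] : key;
  if (typeof entry === "object")   // nested plural entry
    entry = entry[pluralForm(arg)];
  return entry.replace("%s", String(arg));
}

console.log(_("Hello %s", "alex")); // "Bonjour alex"
console.log(_("pluralString", 0));  // "0 telechargement"
console.log(_("pluralString", 10)); // "10 telechargements"
</code></pre>
<p>Swapping <code>pluralForm</code> per target language is what the nested YAML structure buys us: the locale file only lists the forms, and the rule stays in code.</p>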
<h1 id="advantages-of-yaml">Advantages of YAML</h1>
<ul>
<li><strong>Multiline strings are supported nicely and are easy to read.</strong> You do not need to add a final <code>\</code> on every line. As multiline is easier, localizers can use it more often, which will surely improve the readability of locale files!</li>
<li><strong>Structured data format.</strong> We can use this power whenever it is needed, for example when we need to implement complex l10n features like plural forms, or any feature that goes beyond simple 1-to-1 localization. Compared to JSON, the cool thing is that even when we define structures, we keep a really simple format with no noise (like <code>{</code>, <code>}</code>, <code>"</code>, ...).</li>
</ul>
<br/>
<p>As nothing comes without issues, here is what I have found with YAML:</p>
<ul>
<li>This format is not a Web standard. I don't think it makes much sense to avoid using it because of that. We are clearly missing a standardized format for localization in the web world.</li>
<li>You may hit some issues when you do not enclose your strings with <code>"</code> or <code>'</code>. For example, you cannot start a string with <code>%</code>, nor have a <code>:</code> in the middle of your string, without enclosing it.</li>
<li>Even though YAML is not a web standard, it has been formally specified. Unfortunately, a handy feature becomes a pitfall for our purpose: some strings are automatically converted. <code>Yes</code>, <code>True</code>, <code>False</code>, ... are automatically converted to a boolean value. We can work around this in multiple ways, either by documenting it or by modifying the parser. The same solution applies here: you need to enclose your strings with quotes.</li>
</ul>
<p><br/><br/></p>
<p>Again, feedback is welcomed on <a href="https://groups.google.com/group/mozilla-labs-jetpack/t/da50c6dac33b445b">this group thread</a> and you can follow this work in <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=691782">bug 691782</a>.</p>
Jetpack localization2011-10-31T07:00:00.000Zhttp://techno-barje.fr/post/2011/10/31/jetpack-localization<p>I'm going to describe the first proposal for localization support in Jetpack.
This approach uses the gettext pattern and JSON files for locales.
It is the first step of multiple iterations; this one only allows retrieving localized strings in JavaScript code.
A later iteration will provide ways to translate files, mainly HTML files.
We also plan to offer an online tool to ease addon localization (like the babelzilla website).</p>
<p>Let's start by looking at a concrete example, then I'll justify our different choices.</p>
<pre><code class="language-javascript">{
"Hello %s": "Bonjour %s",
"hello_user": "Bonjour %s"
}
</code></pre>
<pre><code class="language-javascript">// Retrieve a dynamic reference to `_` gettext method with:
const _ = require("l10n").get;
// Then print to the console a localized string:
console.log(_("Hello %s", "alex"));
// => Prints "Bonjour alex" in french.
// Or, if we don't want to use localized string in addon code:
console.log(_("hello_user", "alex"));
</code></pre>
<h2 id="why-gettext">Why gettext?</h2>
<ol>
<li>It gives a way to automatically fetch localizable strings or ids from source code
by searching for <code>_( )</code> pattern. </li>
<li>It allows to use either strings or IDs as value to translate.
It is obviously better to use IDs. Because locales will broke
each time addon developer fix a typo in the main language hard coded in the code.</li>
</ol>
<p>But we should not forget that the high-level APIs try to
simplify addon development, so it has to be really easy to translate a simple
addon that has only 2 JS files and fewer than 50 lines of code!
Making a locale file for the default language mandatory
appears like a big burden for such a small addon.</p>
<p>Having said that, I'm really happy that the gettext approach doesn't discourage,
nor make it harder, to use IDs. So if addon developers build a big addon,
or just want to take more time to follow better practices, they can still do it, easily!</p>
<h2 id="why-json-for-locales">Why JSON for locales?</h2>
<p>We could have used properties files, like XUL addons do. But this format has some
limitations that are not compatible with the gettext pattern: keys cannot contain spaces
and are limited to ASCII or something alike, so we cannot put text in a key.</p>
<p>So instead of using yet another specific format, I'm suggesting here to use JSON.
JSON is really easy to parse and generate on both the client and server side,
and I'm convinced that it is simple enough to be edited with a text editor.
On top of that we can build a small web application to ease localization.</p>
<p>In my very first proposal, I used a complex JSON object with nested attributes.
But it ended up complicating the whole story without any real advantage.
So I'm now suggesting the simplest JSON file we could require:
one big object whose keys are the strings or IDs to translate, and whose values are the translated strings.
Later, we will be able to use JSON features to implement complex localization features,
like plural handling, so that values may be an array of plural forms.</p>
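<p>To illustrate that last idea, here is a hypothetical sketch (not a finalized format) where a locale value is an array of plural forms, selected by a per-language rule:</p>
<pre><code class="language-javascript">// Hypothetical sketch: locale values as arrays of plural forms.
var locale = {
  "%d download": ["%d telechargement", "%d telechargements"]
};

function _(key, n) {
  var value = locale[key];
  if (Array.isArray(value))
    value = value[n <= 1 ? 0 : 1]; // French rule: 0 and 1 are singular
  return value.replace("%d", String(n));
}

console.log(_("%d download", 1)); // "1 telechargement"
console.log(_("%d download", 5)); // "5 telechargements"
</code></pre>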
<h2 id="the-big-picture">The big picture</h2>
<p>Everything starts with an addon developer or one of their contributors.
If one of them wants to make the addon localizable, they have to use this new localization module.</p>
<pre><code class="language-js">const _ = require("l10n").get;
</code></pre>
<p>Multiple choices have already been made here:</p>
<ul>
<li><code>_</code> is not a <em>magic global</em>; we need to explicitly require it.
This choice will simplify compatibility with other CommonJS environments, like NodeJS.</li>
<li>The name of the module itself is <code>l10n</code> instead of <code>localization</code>, in order to make it easier to type.</li>
<li>This module exposes the <code>_</code> function on a <code>get</code> attribute, in order to be able to
expose other methods later. I'm quite confident we will need some functions for plurals or file localization.</li>
</ul>
<p>Then, they need to use <code>_</code> on localizable strings:</p>
<pre><code class="language-js">var cm = require("context-menu");
cm.Item({
label: _("My Menu Item"),
context: cm.URLContext("*.mozilla.org")
});
</code></pre>
<p>Now, they have two choices:</p>
<ul>
<li>use a string written in their preferred language, like here,
so that they don't have to create a locale file;</li>
<li>use an ID: instead of <code>_("My Menu Item")</code>, we would use <code>_("contextMenuLabel")</code>.
But this forces them to create a localization file in order to map <code>contextMenuLabel</code> to <code>My Menu Item</code>.</li>
</ul>
<p>Then, either a developer or a localizer can generate or modify the locale files.
Each jetpack package can have its own <code>locale</code> folder,
which contains one JSON file per supported language.
Here is what a jetpack addon looks like:</p>
<pre><code>* my-addon/
  * package.json   # manifest file with addon name, description, version, ...
  * data/          # folder for all static files
    * images,
    * html files,
    * ...
  * lib/           # folder that contains all JS modules
    * main.js      # main module to execute on startup
    * my-module.js # custom module that may use the localization module
    * ...
  * locale/        # our main interest!
    * en-US.json
    * fr-FR.json
    * en-GB.json
    * ...
</code></pre>
<p>The next iteration will add a new feature to our command-line tool,
which will generate or update a locale file for a given language by extracting localizable strings from the source code.
For example, the following command will generate the <code>my-addon/locale/fr-FR.json</code> file:</p>
<pre><code class="language-sh">$ cfx fetch-locales fr-FR
</code></pre>
<pre><code class="language-javascript">{
"My Menu Item": "My Menu Item"
}
</code></pre>
<p>Finally, we need to replace the right-side values with the localized strings:</p>
<pre><code class="language-javascript">{
"My Menu Item": "Mon menu"
}
</code></pre>
<p>And build the final addon XPI file with:</p>
<pre><code class="language-sh">$ cfx xpi
</code></pre>
<p>Any kind of feedback would be highly appreciated on <a href="https://groups.google.com/group/mozilla-labs-jetpack/t/da50c6dac33b445b">this group thread</a>.</p>
<p>If you want to follow this work,
subscribe to <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=691782">bug 691782</a>.</p>