Mozilla Thunderbird: Thunderbird 38 Released

Thunderbird 38 is now released (actual initial version is 38.0.1 to maintain compatibility with equivalent Firefox releases). This release has some significant new features, as well as many, many bug fixes. Some of the new features include:

  • Calendaring is now shipped by default. This continues to be implemented as the Lightning extension, but that is now enabled and installed by default.
  • Chat now supports Yahoo Messenger.
  • Messages can be filtered when sent and when archived.
  • You can now search multiple address books.
  • Gmail users can now authenticate using Google’s preferred OAuth2 authentication (which means that new Gmail accounts should work with Thunderbird without special configuration).

This is a significant milestone for the Thunderbird team, as it is the first release that has been fully managed by our volunteer team rather than by Mozilla staff.

Mozilla is still heavily involved with this release, as we still use Mozilla infrastructure for the build and release process. Thanks to the many Mozilla staff who helped out to fix issues!

Thanks to all of the volunteers who have contributed to make this release possible!

Kim Moir: Mozilla pushes - May 2015

Here's May 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.


Trends

The number of pushes decreased to 8363, down from the 8894 recorded in the previous month.

 
Highlights
  • 8363 pushes
  • 270 pushes/day (average)
  • Highest number of pushes/day: 445 pushes on May 21, 2015
  • 16.03 pushes/hour (highest average)

General Remarks
  • Try has around 62% of all the pushes now
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 27% of all the pushes.

Records
  • August 2014 was the month with the most pushes (13090 pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes 



Mozilla Release Management Team: Firefox 39 beta4 to beta5

As expected, this beta is a bit bigger than beta 4. In this release, we disabled async plugin init, which was an important source of crashes. We also took a change to improve the security of Sync.

  • 28 changesets
  • 44 files changed
  • 550 insertions
  • 118 deletions

Extension   Occurrences
js          18
cpp         12
java        4
xul         2
ini         2
xml         1
py          1
jsm         1
in          1
html        1
h           1

Module      Occurrences
toolkit     9
js          8
browser     8
mobile      6
dom         6
xpcom       1
testing     1
mozglue     1
modules     1
gfx         1
docshell    1
accessible  1

List of changesets:

Jan de Mooij: Bug 1143679 - Make TryNoteIterIon behave more like Baseline/interpreter iterators. r=shu, a=sledru - c5327254125d
Gerald Squelart: Bug 1160908 - [EME] Delete GMPRecords that are 0 bytes in size. r=cpearce, a=lizzard - d787fb3b86be
Karl Tomlinson: Bug 1162364 - Detect and abort MF_E_TRANSFORM_STREAM_CHANGE infinite loops. r=cpearce, a=lizzard - 35fad3eed522
Panos Astithas: Bug 1170495 - Let the debugger server started by GCLI debug chrome code. r=ochameau, r=jwalker, a=lizzard - ffc4a84bbcdf
Richard Newman: Bug 1061273 - Pre: add 20+ to Versions. a=lizzard - d999d29c8062
Richard Newman: Bug 1061273 - Part 1: Put DEFAULT_PROTOCOLS and DEFAULT_CIPHER_SUITES in GlobalConstants. r=nalexander, a=lizzard - d9943a662657
Richard Newman: Bug 1061273 - Part 2: Use per-version DEFAULT_PROTOCOLS and DEFAULT_CIPHER_SUITES. r=nalexander, a=lizzard - 676ae5e1e869
Richard Newman: Bug 1061273 - Part 3: Enable TLSv1 for landing. r=nalexander, a=lizzard - c2f209debd59
Richard Newman: Bug 1061273 - Part 4: Use SSLSocketFactory directly, rather than our own TLSSocketFactory subclass. r=nalexander, a=lizzard - b36c17437332
Richard Newman: Bug 1061273 - Follow-up: Fix cipher suites for Sync 1.1. r=nalexander, a=lizzard - dd38d3ccbacd
Jason Orendorff: Bug 1155208 - In ToPropertyDescriptor, update attributes immediately after storing a getter or setter in a descriptor. r=jandem, a=abillings - 8de5d18494ab
Tooru Fujisawa: Bug 1171777 - Make sure the index is nonnegative in GetElement IC with hole. r=jandem, a=lizzard - c569e4e29be6
Tooru Fujisawa: Bug 1150297 - Move source property to RegExp instance again. r=till, a=lizzard - 6fe433fed5fb
Justin Dolske: Bug 1168481 - Add en-* translations to Pocket. r=adw, a=lizzard - f137fedd1455
Ryan VanderMeulen: Backed out changeset 6fe433fed5fb (Bug 1150297) for suspicion of causing widespread test failures. - af8d7ef03660
Geoff Brown: Bug 1171303 - Set pref extensions.autoupdate.enabled to false to avoid testOfflinePage failures. r=mfinkle, a=test-only - 60de9fc9408f
Wes Kocher: Bug 845134 - Disable test_focusedChild.html on Windows 8. a=test-only - a8acd56b1734
Robert Strong: Bug 1171792 - An update is attempted from an update notification when it is not possible to update. r=spohl, a=lhenry - 038ab776f75f
Florian Quèze: Bug 1154019 - Intermittent browser_devices_get_user_media.js | Test timed out | Found a Browser:WebRTCGlobalIndicator. r=Gijs, a=test-only - ab9bfc1f6f00
Randall Barker: Bug 1159830 - Autophone - webappstartup should not use console.log to output WEBAPP STARTUP COMPLETE. r=snorp, r=froydnj, a=lizzard - cadc2cc10081
David Major: Bug 763395 - Add nlsp.dll to blocklist. r=bsmedberg, a=lizzard - 90c96749f1a3
Paolo Amadini: Bug 1158133 - Add a way to disable async stacks, and disable by default on mobile platforms. r=bent, r=jimb, a=lizzard - bfba880deefd
Tim Taubert: Bug 1172137 - Record outerWindowID for new tabs first when we have a docShell. r=mconley, a=lizzard - 5ac2c3ff9a6d
Tim Taubert: Bug 1172137 - Update outerWindowID after a browser's remoteness changed. r=mconley, a=lizzard - 73d2a857261f
Tim Taubert: Bug 1172137 - Update outerWindowID mappings when swapping docShells. r=mconley, a=lizzard - 71ce4845c6f6
Aaron Klotz: Bug 1170676 - Fix null dereference in PluginModuleParent::StreamCast. r=jimm, a=lizzard - b41419fa52bb
Milan Sreckovic: Bug 1145143 - Check if buffer size is correct before calling the update. r=nical, a=lizzard - ad1f0e1ddaf2
Aaron Klotz: Bug 1173182 - Disable async plugin init for this release. r=vladan, a=lizzard - 534a78bbabb4

Mozilla Reps Community: Reps Weekly Call – June 11th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

[Image: teach-mozilla]

Summary

  • Featured Events.
  • EU Code Week / Africa Code Week.
  • Bangladesh Community Meetup.
  • What’s going on at community forums.

AirMozilla video

Detailed notes

Shoutouts to @emma_irwin, @ThePhoenixBird and @gautha91, who have just finished their one-year Council term; to the Bangladesh community for creating all the plans to support the launch of the Webmaker app in Bangladesh; and to @mkohler for all his awesome work and help in Bugzilla.

Featured events

These are some of the events that took place last week.

  • DevTalk Bucharest, Bucharest, Romania. 11th
  • FSA Leaders Camp, Mataasnakahoy, Philippines. 12th, 13th
  • Webmaker meetup Bucaramanga, Bucaramanga, Colombia. 12th-14th.
  • Mombassa Webmaker Party, Coast, Kenya. 13th.

Don’t forget to add your event to Discourse, and share some photos, so it can be shared on the Reps Twitter account.

EU Code Week / Africa Code Week

We were invited to either organize an event during EU Code Week and Africa Code Week or join an event organized by other people. To add your event to our Coding Map of Europe, please visit the events page. You can use the #codeEU Toolkit for organizers and a list of resources to help you get started.

If you need help or have a question you can get in touch with EU Code Week Ambassadors in your country.

Bangladesh Community meetup

Core members of the Bangladesh community got together for the first time and worked on planning the launch of Webmaker in Bangladesh, with a very ambitious goal: 10,000 downloads!

They also started thinking about the projects that will come after Webmaker: a push for Fennec (Firefox for Android). Most smartphone users in Bangladesh are on Android so there’s a great opportunity for Mozilla there.

What’s going on at community forums?

There are some interesting conversations currently going on at the community forums that you might want to check out.

Also as a reminder, the Participation team has its own public category where the team communicates their daily work.

Raw etherpad notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Mozilla Release Management Team: Firefox 39 beta3 to beta4

Beta 4 contains a small number of patches (beta 5 will have more). This beta is mostly focused on graphics issues.

  • 19 changesets
  • 38 files changed
  • 657 insertions
  • 301 deletions

Extension   Occurrences
js          16
cpp         8
h           4
jsm         3
ini         3
css         2
xml         1
html        1

Module      Occurrences
browser     13
toolkit     10
gfx         5
media       4
testing     2
mobile      1
js          1
embedding   1
dom         1

List of changesets:

Edwin Flores: Bug 1160445 - Add detailed logging for EME promise failures. r=cpearce, r=bholley, a=lizzard - 263f9318751a
Chris Pearce: Bug 1160101 - Revert browser.eme.ui.enabled pref change from Bug 1160101. r/a=backout - 51f5d060b146
Terrence Cole: Bug 1170665 - Disable the windows segfault popup in the shell. r=jandem, a=NPOTB - f5030585d5c0
Mike Connor: Bug 1171730 - Funnelcake builds should use geo-specific defaults. r=florian, a=sledru - e25fcbbd93a4
James Graham: Bug 1171916 - Disable another unstable navigation-timing test on osx. a=test-only - b009c272abac
Erik Vold: Bug 1142734 - Use Timer.jsm and add some logs to jetpack-addon-harness.js. r=mossop, a=test-only - b2455e4eca11
Gijs Kruitbosch: Bug 1166066 - Fix opening new windows from a private window. r=jdm, a=lizzard - a7f385942c76
Matt Woodrow: Bug 1153123 - Don't upload in the ImageBridge thread if A8 texture sharing is broken. r=Bas, a=lizzard - d744ad902c75
David Major: Bug 1167189 - Use a size annotation on the OOM abort. r=bholley, a=lizzard - 561e0bdf9614
David Anderson: Bug 1170211 - Fix a startup crash when attempting to test D3D11 texture sharing. r=jmuizelaar, a=lizzard - ec9c793f24ad
Mark Hammond: Bug 1170079 - Don't treat an old readinglist last-sync-date as a prolonged error if it's disabled. r=adw, a=lizzard - 9dd33c2b4304
Richard Newman: Bug 1170819 - Enable payments in Fennec release channel. r=mfinkle, r=AndyM, a=lizzard - a2c9c4c49319
Gijs Kruitbosch: Bug 1172270 - backed out changeset b38b8126e4d1 (Bug 1160775), a=backout/relman - 62f75a6439dd
Richard Marti: Bug 1169981 - Add win10 media query to listitem and treechildren. r=dao, a=lizzard - 0c1d5e2461d4
Marco Bonardo: Bug 1167915 - "Add a Keyword for this Search" does not work anymore on POST forms. r=ttaubert, a=lizzard - d810a18a0e0f
Mark Hammond: Bug 1170926 - Have the hamburger menu notice the 'needs reauthentication' state. r=adw, a=lizzard - 7cee52e60929
Jeff Muizelaar: Bug 1171094 - Disallow D3D11 ANGLE with old DisplayLink drivers. r=Bas, a=lizzard - da14f82d9caf
Randell Jesup: Bug 1132318 - Merge SelectSendFrameRate with SelectSendResolution. r=bwc, a=abillings - 48c9f45a00f2
Matt Woodrow: Bug 1170143 - Disable texture sharing if we've blacklisted direct2d. r=Bas, a=lizzard - 4241def0561b

Karl Dubost: Just Patch This! (or 101 for patching Firefox)

Contributing to Mozilla projects can be done in many ways. This week, a Webcompat bug about behavior differences between browsers led me to propose a patch on Firefox and get it accepted. I learned a couple of things along the way that might be useful for others.

The Webcompat Bug - Search Placeholder On Mobile Sites

It started with a difference of behavior between Chrome and Firefox on Android devices on the Nikkei Web site, reported by kudodo from Mozilla Japan. I looked at the bug and checked the differences in Firefox 40 (Gecko 40) and Opera 30 (Blink/Chrome 43).

nikkei site in Gecko and Blink

Indeed the placeholder text appears in Gecko and not in Blink.

<form id="searchBlank" action="" method="get">
    <p><input id="searchBox" placeholder="検索" name="" type="search"></p>
</form>

What was happening? I started the inspectors in Gecko and in Blink looking at the CSS properties used in both cases. The CSS didn't have any WebKit differences (for once).

In the Opera/Blink inspector, I noticed something in the properties inherited from the user agent stylesheet (aka the default style that any Web page inherits from the browser).

Blink inspector

Gecko didn't have the box-sizing: border-box; for input[type=search] in its default CSS.

input[type=search] {box-sizing: border-box;}

I tested in the inspector adding it to the stylesheet of the Web site, and noticed it was indeed fixing the issue in Gecko without creating an issue in Blink. Good! We need to contact the Web site so they can modify their CSS.

Gecko/Blink Default Rendering Differences

That said, fixing this particular Web site will not fix all the other sites on the Web which might exhibit the same issue. We also need to advocate for the same default rendering in Gecko (Firefox), Blink (Chrome, Opera, UCWeb, …), WebKit (Safari), Spartan (IE) and Edge (new MS browser). This led me to two questions:

  1. What is the most common behavior across browsers?
  2. Does Mozilla have a bug in Bugzilla about this?

After testing, I found out that box-sizing: border-box; was the most common behavior for input[type=search] and that there was an open bug about input[type=search] rendering.

Patching Firefox User-Agent StyleSheet

I added a comment on the bug explaining that it was creating a Web compatibility issue. Then I went on a journey to find where the user-agent stylesheet was in the Mozilla source code repository (quick tip: search old bugs related to the same area in Bugzilla; their previous patches might show you where to look in the source code). I finally found that the user-agent stylesheet was managed in layout/style/ and that the file forms.css seemed to be pretty close to what I needed. I proposed a patch which fixed the issue but was not perfect. I made mistakes and it's ok. Boris Zbarsky and David Baron helped me to get it straight.

  • The forms.css file had trailing white-spaces. My first patch fixed both the spaces issue and added the missing property. Boris invited me to make two separate patches.
  • My first patch didn't contain a summary (aka commit message). I had made patches with Mercurial in the past for the UA override, but I didn't know you could add a message at this stage. So David invited me to add proper messages on the patches.
  • Another mistake I made was to use "Removing" instead of "Remove". David said: We usually write commit messages with verbs rather than nouns, so "Remove trailing spaces".

Creating A Patch On Mozilla Central

I'm assuming here that

  1. you already have a copy of mozilla-central on your computer. See Get the source code.
  2. you have properly configured Mercurial for Mozilla.

These are the steps on the command line.

cd /your/path/mozilla-central
hg status
hg qpop -a
hg pull -u
cd layout/style
hg qnew bug1164783-spaces
# I'm using sublimetext. Choose a text editor.
subl forms.css
# checking if the diff makes sense
hg diff
# Adding a message to the patch
hg qrefresh -m "Bug 1164783 - Remove trailing spaces."
hg export qtip > ~/bug-1164783.patch
# creating another patch
hg qnew bug1164783-input
# doing the edits
hg diff
hg qrefresh -m "Bug 1164783 - Change default style for input type=search to box-sizing: border-box."
hg export qtip > ~/bug-1164783-input.patch

Once the modifications to the code are done and the patches are ready, you can attach the patch to the bug on Bugzilla.

Window for attaching bugs in bugzilla

  • Use Browse File to choose the patch you created
  • Select r? for the review; if you do not know whom to ask for the review, there is usually a list of suggested reviewers.

If the review is positive (r+), you can go on and add checkin-needed to the keywords of the bug. A committer will add your modifications to the next release of Firefox.

If the review is negative (r-), read the review comments carefully. Create another patch and attach it to the bug. At the same time, tick the box for the obsolete option on the previous patch.

The patch for input[search] should appear in Firefox 41.

Opening A Bug On HTML Specification

The default rendering of input[type=search] was not defined in the HTML specification. I opened a bug on the W3C issue tracker so the specification could be fixed, and so that if a group implements a new rendering engine, they will not have to reverse engineer the behavior of other browsers.

Otsukare!

John O'Duinn: The canary in the coal mine

After my recent “We are ALL Remoties” presentation at Wikimedia, I had some really great followup conversations with Arthur Richards at WikiMedia Foundation. Arthur has been paying a lot of attention to scrum and agile methodologies – both in the wider industry and also specifically in the context of his work at Wikimedia Foundation, which has people in different locations. As you can imagine, we had some great fun conversations – about remoties, about creating culture change, and about all-things-scrum – especially the rituals and mechanics of doing daily standups with a distributed team.

Next time you see a group of people standing together looking at a wall, and moving postit notes around, ask yourself: “how do remote people stay involved and contribute?” Taking photographs of the wall of postit notes, or putting the remote person on a computer-with-camera-on-wheeled-cart feels like a duct-tape workaround; a MacGyver fix done quickly, with the best of intentions, genuinely wanting to help the remote person be involved, but still not-a-great experience for remoties.

There has to be a better way.

We both strongly agree that having people in different locations is just a way to uncover the internal communication problems you didn’t know you already have… the remote person is the canary in the coal mine. Having a “we are all remoties” mindset helps everyone become more organized in their communications, which helps remote people *and* also the people sitting near each other in the office.

Arthur talked about this idea in his recent (and lively and very well attended!) presentation at the Annual Scrum Alliance “Global Scrum Gathering” event in Phoenix, Arizona. His slides are now visible here and here.

If you work in an agile / scrum style environment, especially with a geo-distributed team of humans, it’s well worth your time to read Arthur’s presentation! Thought provoking stuff, and nice slides too!

Air Mozilla: Brown Bag Talk: Passwords and Login Problems

Brown Bag Talk: Passwords and Login Problems How can Mozilla improve the user experience around logins and passwords? Over the past six weeks, the Passwords team, joined by Amelia Abreu, has spent...

Air Mozilla: Participation at Mozilla

Participation at Mozilla The Participation Forum

Air Mozilla: Reps weekly

Reps weekly Weekly Mozilla Reps call

Air Mozilla: Assemblée Générale de l'ADULLACT

Assemblée Générale de l'ADULLACT Three talks on free software, followed by the ADULLACT general assembly (reports, presentation of the accounts, approval of the accounts, provisional budget, renewal of terms, Q&A).

Mike Taylor: position: fixed + overflow: hidden + (plus some relative positioning and z-index stuff)

If you have an element with position: fixed inside of an element that has overflow: hidden, what's the expected rendering when you need to, uh, overflow? Should the inner fixpos element be clipped by its parent or not?

The spec appears to be pretty clear.

Fixed positioning is similar to absolute positioning. The only difference is that for a fixed positioned box, the containing block is established by the viewport.

So, according to that spec text, the parent element's overflow shouldn't have any effect, because the fixpos element's containing block is the viewport, not that parent.

Neat. But how do browsers behave? Open this testcase and have a look:

https://miketaylr.com/bzla/fixed-overflow-1.html

Everything behaves as the spec describes. overflow: hidden on the parent is ignored. High-fives all around.

Now if you throw in both a z-index: 1 (any number will do) and a position: relative on the parent element, things get...different.

https://miketaylr.com/bzla/fixed-overflow-2.html
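For reference, here is a minimal sketch of roughly what such a testcase might contain; this is my own reconstruction, not the exact markup of the linked pages.

<!DOCTYPE html>
<!-- Hypothetical reduction of the second testcase; remove the two marked
     declarations to approximate the first one. -->
<meta name="viewport" content="width=device-width">
<style>
  .parent {
    overflow: hidden;
    position: relative;  /* remove for testcase 1 */
    z-index: 1;          /* remove for testcase 1 */
    width: 200px;
    height: 200px;
    border: 2px solid blue;
  }
  .fixpos {
    position: fixed;
    top: 150px;
    left: 150px;
    width: 200px;
    height: 200px;
    background: purple;
  }
</style>
<div class="parent">
  <div class="fixpos"></div>
</div>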

Same as the first testcase (what I would expect):
  • Mobile + Desktop Firefox
  • Mobile Chrome
  • Desktop Edge

Different from the first testcase:
  • Mobile + Desktop Safari: the fixpos element is clipped by its parent (meaning overflow: hidden worked).
  • Mobile + Desktop Opera (Presto): same as Safari.
  • Desktop Chrome: if the viewport is smaller than the containing parent, then overflow: hidden on the parent kicks in (resize the browser window to see it).
  • Mobile + Desktop Opera (Blink): same as Chrome.

And now, a 3rd testcase which adds user-scalable=no to the meta (viewport) element (the same effect happens if you constrain initial-scale and maximum-scale to 1).

https://miketaylr.com/bzla/fixed-overflow-3.html

The only browser this seems to make a difference in is Chrome on Android, which now clips the child element. I think I discovered this by accident.

So it seems like non-Edge and non-Firefox browsers treat a position: fixed element as a position: absolute element (or something?), when contained by a position: relative parent that also has a z-index set.

Unfortunately at least one site relies on this bug (see this comment).

If you happen to know why, send a self-addressed stamped envelope to twitter dot com slash miketaylr and let me know.

UPDATE

See https://twitter.com/gregwhitworth/status/609095284010852352 from @gregwhitworth. I'll try to write a follow-up post when I understand everything.

Air Mozilla: Mozilla Hosts NewCo Silicon Valley with VP Platform Engineering David Bryant

Mozilla Hosts NewCo Silicon Valley with VP Platform Engineering David Bryant NewCo is a festival for innovation where innovative companies invite the community to their spaces to hear directly about the impact they are making.

Air Mozilla: Quality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air Mozilla: Tech Talk: Shipping Firefox

Tech Talk: Shipping Firefox Lawrence Mandel will be speaking on shipping Firefox or how to push code from thousands of developers to hundreds of millions of users.

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 18

The Joy of Coding (mconley livehacks on Firefox) - Episode 18 Watch mconley livehack on Firefox Desktop bugs!

Mozilla WebDev Community: Extravaganza – June 2015

Once a month, web developers from across Mozilla get together to plot which C-level executive we’re going to take out next. Meanwhile, we find time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Lazy-loading Fonts on MDN

First up was shobson with news about lazy-loading CSS fonts on MDN. The improvement allows users to see content immediately and avoids text flashing associated with displaying the fonts after they’ve loaded. The pull request is available for review for anyone interested in how it was achieved.

Lazy-loading Tabzilla on Air Mozilla

Next was peterbe, who told us about how Air Mozilla is now lazy-loading Tabzilla, the white Mozilla tab at the top of many of our websites. By not loading the extra code for the tab until the rest of the page has loaded, they were able to reduce the load time of the initial document by 0.7 seconds. Check out the bug for more info.

ContributeBot

pmac stopped by to show off ContributeBot, a Hubot script that reads contribute.json files and, when the channel is quiet, welcomes new visitors with information from the file. It gives new users info about where to find documentation, as well as pinging important people in the project to let them know that there’s someone new.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

pyelasticsearch 1.3

ErikRose wanted to let people know that pyelasticsearch 1.3 is out. It now has HTTPS support as well as new constructor arguments that help reduce the number of times you repeat yourself while using the library.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name            Role              Work
Gloria Dwomoh   Outreachy Intern  Air Mozilla
Michael Nolan   Intern            Air Mozilla
Peter Elmers    Intern            DXR

This was a… productive month.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Mozilla Addons Blog: Add-ons Update – Week of 2015/06/10

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

  • Most nominations for full review are taking less than 10 weeks to review.
  • 233 nominations in the queue awaiting review.
  • Most updates are being reviewed within 7 weeks.
  • 101 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 10 weeks.
  • 268 preliminary review submissions in the queue awaiting review.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 39 Compatibility

The Firefox 39 compatibility blog post is up. The automatic compatibility validation will probably be run later this week.

Firefox 40 Compatibility

The Firefox 40 compatibility blog post is also coming up.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox.

Yesterday I posted this update of where we are with signing on AMO. In a nutshell, all AMO extensions for Firefox that passed review have been signed, and all new versions will be signed once they pass review. We have enabled Unlisted extension submission, but it’s currently under testing, so expect some bugs. The major issues will be resolved in the coming week and we’ll make an announcement on this blog to indicate we’re ready for your submissions.

The wiki page on Extension Signing has information about its timeline, as well as responses to some frequently asked questions.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will run on multiple processes now, running content code in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

Adrian Gaudebert: Rethinking Socorro's Web App

[Image: rewrite-cycle.jpg (credits @lxt)]

I have been thinking a lot about what we could do better with Socorro's webapp in the last months (and even longer: the first discussions I had about this with phrawzty date from Spring last year). Recently, in a meeting with Lonnen (my manager), I said "this is what I would do if I were to rebuild Socorro's webapp from scratch today". In this post I want to write down what I said and elaborate on it, in the hope that it will serve as a starting point for upcoming discussions with my colleagues.

State of the Art

First, let's take a look at the current state of the webapp. According to our analytics, there are 5 parts of the app that are heavily consulted, and a bunch of other less used pages. The core features of Socorro's front-end are:

Those we know people are looking at a lot. Then there are other pages, like Crashes per User, Top Changers, Explosive Crashes, GC Crashes and so on that are used from "a lot less" to "almost never". And finally there's the public API, on which we don't have much analytics, but which we know is being used for many different things (for example: Spectateur, crash-stats-api-magic, Are we shutting down yet?, Such Comments).

The next important thing to take into account is that our users oftentimes ask us for some specific dataset or report. Those are useful at a point in time for a few people, but will soon become useless to anyone. We used to try and build such reports into the webapp (and I suppose the ones from above that are not used anymore fall into that category), but that costs us time to build and time to maintain. And that also means that the report will have to be built by someone from the Socorro team who has time for it, it will go through review and testing, and by the time it hits our production site it might not be so useful anymore. We have all been working on trying to reduce that "time to production", which resulted in the public API and Super Search. And I'm quite sure we can do even better.

Building Reports

[Image: bob-the-builder.jpg]

Every report is, at its core, a query of one or several API endpoints, some logic applied to the data from the API, and a view generated from that data. Some reports require very specific data, asking for dedicated API endpoints, but most of them could be done using either Super Search alone or some combination of it with other API endpoints. So maybe we could facilitate the creation of such reports?

Let us put aside the authentication and ACL features, the API, the admin panel, and a few very specific features of the web app, to focus on the user-facing features. Those can be simply considered as a collection of reports: they all call one or several models, have a controller that does some logic, and then are displayed via a Django template. I think what we want to give our users is a way to easily build their own reports. I would like them to be able to answer their needs as fast as possible, without depending on the Socorro team.

The basic brick of a fresh web app would thus be a report builder. It would be split in 3 parts:

  • the model controls the data that should be fetched from the API;
  • the controller gets that data and performs logic on it, transforming it to fit the needs of the user;
  • and the view will take the transformed data and turn it into something pretty, like a table or a graph.

Each report could be saved, bookmarked, shared with others, forked, modified, and so on. Spectateur is a prototype of such a report builder.
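Purely as an illustration of that three-part split (nothing below is an existing Socorro API; every name is made up for this sketch), a saved report might boil down to a small description like this:

// Hypothetical report definition: model / controller / view.
var topSignaturesReport = {
  // model: what to fetch from the public API (Super Search here)
  model: {
    endpoint: "/api/SuperSearch/",
    params: { product: "Firefox", _facets: "signature" }
  },
  // controller: turn the raw response into rows
  controller: function (data) {
    return data.facets.signature.map(function (facet) {
      return { signature: facet.term, count: facet.count };
    });
  },
  // view: how to display the rows (a table, a graph, ...)
  view: { type: "table", columns: ["signature", "count"] }
};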

We developers of Socorro would use that report system to build the core features of the app (top crashers, home page graphs, etc.), maybe with some privileges. And then users will be able to build reports for their own use or to share with teammates. We know that users have different needs depending on what they are working on (someone working on FirefoxOS will not look at the same reports as someone working on Thunderbird), so this would be one step towards allowing them to customize their Socorro.

One Dashboard to Rule Them All

So users can build their own reports. Now what if we pushed customization even further? Each report has a view part, and that's what would be of interest to people most of the time. Maybe we could make it easy for a user to quickly see the main reports that are of interest to them? My second proposal would be to build a dashboard system, which would show the views of various reports on a single page.

A dashboard is a collection of reports. It is possible to remove or add new reports to a dashboard, and to move them around. A user can also create several dashboards: for example, one for Firefox Nightly, one for Thunderbird, one for an ongoing investigation... Dashboards only show the view part of a report, with links to inspect it further or modify it.

[Image: dashboard-example.png]

An example of what a dashboard could look like.

Socorro As A Platform

The overall idea of this new Socorro is to make it a platform where people can find what they want very quickly, personalize their tool, and build whatever feature they need that does not exist yet. I would like it to be a better tool for our users, to help them be even more efficient crash killers.

I can see several advantages to such a platform:

  • time to create new reports is shorter;
  • people can collaborate on reports;
  • users can tweak existing reports to better fit their needs;
  • people can customize the entire app to be focused on what they want;
  • when you give data to people, they build things that you did not even dream about. I expect that will happen on Socorro, and people will come up with incredibly useful reports.

I Need Feedback

[Image: feedback-everywhere.jpg]

Concretely, the plan would be to build a brand new app along the existing one. The goal won't be to replace it right away, but instead to build the tools that would then be used to replace what we currently have. We would keep both web apps side by side for a time, continuing to fix bugs in the Django app, but investing all development time in the new app. And we would slowly push users towards the new one, probably by removing features from the Django app once the equivalent is ready.

I would love to discuss this with anyone interested. The upcoming all-hands meeting in Whistler is probably going to be the perfect occasion to have a beer and share opinions, but other options would be fine (email, IRC... ). Let me know what you think!

Morgan Phillips: service "Dockerized Firefox Builds" status

After 160 [work] days, 155 patches, and approximately 8,977 cups of coffee, I'm now wrapping up my third quarter with Mozilla -- the time has flown by! In these first eight months I've written some fun code, shaved enough yaks to knit a new wool sweater, and become acquainted with the ghosts of Mozilla RelEng past, present, and future, to great effect.


"Behold child, 'tis buildbot"
Quarter one was spent getting familiar with Mozilla RelEng's legacy: re-writing/taking ownership of existing services like clobberer. The second was all about optimizing our existing infrastructure, principally by rolling out runner. Then, this one has been dedicated to new beginnings: porting jobs from our old buildbot based CI infra to the shiny new TaskCluster based one.

In moving [Linux] jobs from buildbot to TaskCluster, I've worked on docker containers which will build Firefox with all of the special options that RelEng needs. This is really cool because it means developers can download our images and work within them as well, thus creating parity between our CI infrastructure and their local environments (making it easier to debug certain bugs). So, what's my status update?

The good news: the container for Linux64 jobs is in tree, and working for both Desktop and Android builds!

The better news: these new jobs are already working in the Try tree! They're hidden in treeherder, but you can reveal them with the little checkbox in the upper right hand corner of the screen. You can also just use this link: https://treeherder.mozilla.org/#/jobs?repo=try&exclusion_profile=false

# note: These are running alongside the old buildbot jobs for now, and hidden. The container is still changing a few times a week (sometimes breaking jobs), so the training wheels will stay on like this for a little while.

The best news: You can run the same job that the try server runs, in the same environment simply by installing docker and running the bash script below.

Bonus: A sister 32 bit container will be coming along shortly.



#!/bin/bash -e
# WARNING: this is experimental mileage may vary!

# Fetch docker image
docker pull mrrrgn/desktop-build:16

# Find a unique container name
export NAME='task-CCJHSxbxSouwLZE_mZBddA-container';

# Run docker command
docker run -ti \
--name $NAME \
-e TOOLTOOL_CACHE='/home/worker/tooltool-cache' \
-e RELENGAPI_TOKEN='ce-n-est-pas-necessaire' \
-e MH_BUILD_POOL='taskcluster' \
-e MOZHARNESS_SCRIPT='mozharness/scripts/fx_desktop_build.py' \
-e MOZHARNESS_CONFIG='builds/releng_base_linux_64_builds.py' \
-e NEED_XVFB='true' \
mrrrgn/desktop-build:16 \
/bin/bash -c /home/worker/bin/build.sh \
;


# Delete docker container
docker rm -v $NAME;

Andy McKay: Not keeping up

When I started out doing web development, hundreds of years ago when CGI and Perl were a pretty neat thing, it felt like there were few tools for web development. Back in the day even formmail.pl was a pretty neat idea... until it turned out to be a pretty hideous security hole.

Over the years it has developed and grown and changed, and it is changing faster than ever; there are more JavaScript frameworks than days in the year. Or to put it another way:

It's not just the front end, how's the back end doing?

So I just need to split my simple CRUD app into 12 microservices, each with their own APIs which call each others' APIs but handle failure resiliently, put them into Docker containers, launch a fleet of 8 machines which are Docker hosts running CoreOS, "orchestrate" them using a small Kubernetes cluster running etcd, figure out the "open questions" of networking and storage, and then I continuously deliver multiple redundant copies of each microservice to my fleet. Is that it?

It's the future

Let's be honest. It's nuts. I can't keep up. I don't even know what keeping up means any more. Do I have to know everything?

For example, I looked at Angular, I thought meh, that doesn't seem worth it. I know I wasn't alone in that:

Instead I focused on playing with Go for a while which was fun. Now it looks like Angular is going the way of Dojo and we should be using React. In fact we are using React, and that seems pretty awesome. But just trying to keep up with learning how to build a project in React whilst doing everything else is a challenge.

This seems to be an industry where we don't focus. Peter Bengtsson said to me the other day these wise words: "Our professional lives isn't about building things. It's about worrying about not building things with certain tools".

And of course my professional life has a continual pressure about my next job. What do I need to learn to make sure I can get a job somewhere else? Because of course, this is IT, nothing is for ever.

A while ago I wrote this:

The problem is that all these things I'm not doing cause me stress. When I get to the end of the day, sometimes all I see is a lack of accomplishment and a list of things I haven't done.

Me

I'm not keeping up with all the new libraries, tools and things springing up all over the place. I never will be; this industry is growing and moving faster than anyone can keep up with, and it's accelerating in size and scope every day. It's awesome it's improving so fast. If you don't get comfortable with this happening you are on a road to burn out.

Celebrate what you do know and admit that you too are probably not keeping up either and that's ok.

Mike Hommey: PSA: mach now stores its log automatically

It’s been a frustration for a long time: oftentimes, you need to dig up the log of the build or test run you just did, but you didn’t redirect it. When you have enough backlog in your terminal, that can work itself out, but usually, what happens is that you rerun the command, redirecting it to a log file, hoping what you’re looking for will happen again.

For people working on the build system, it’s even worse because it involves someone else: someone comes and says “my build fails with $message”, and usually, thanks to parallel builds, the actual error message is buried deep in their terminal history… when they have a terminal history (Windows console, I’m looking at you).

Anyways, as of bug 985857, as long as you don’t already redirect mach‘s output to a file, it will now save it for you automatically.

At the moment, the feature is basic, and will store the last log of the last command (mostly). So if you run mach build followed by mach xpcshell-test, only the log for the latter will be available, and the log of the former will be lost.

The log is stored in the same format as mach uses when you give it the -l argument, which is an aggregate of many json pieces, and not very user friendly. Which is why mach now also has a new command to read those logs:

mach show-log

By default, it will display the last log mach stored, but you can also give it the path to any log you got out of mach with the -l argument.

So you can do either:

mach <command>
mach show-log

or

mach -l log-file.json <command>
mach show-log log-file.json

Note that show-log will spawn less automatically, so that you have paging and search abilities.

Pomax: A look at thoughts that have come from working with React at the Mozilla Foundation (https://www.mozilla.org/foundation), where we use it extensively for our Webmaker Android app (https://github.com/mozilla/webmaker-android/)

First off, the single most important note about React I ever wrote.

Let me start this blog post with the most important thing about React, so that we understand why things happen the way they do:

If used correctly, your users will think they are manipulating a UI, when in fact they are manipulating React, which may then update the UI

Sounds simple, doesn't it? But it has a profound effect on how UI interactions work, and how you should be thinking about data flow in React applications. For instance, let's say we have a properly written React application that consists of a page with some text, and a slider for changing the text's opacity. I move the slider. What happens?

In traditional HTML, I move the slider, a change event is triggered, and if I had an eventListener hooked up for that, I could then do things based on that change.

React doesn't work like that

Another short and sweet sentence: React doesn't work with "changes to the UI" as you make them; React doesn't allow changes to the UI without its consent. Instead, React intercepts changes to the UI so they don't happen, then triggers the components that are tied to the UI the user thinks they're interacting with, so that those components can decide whether or not a UI update is necessary.

In a well written React application, this happens:

  1. I try to move the slider.
  2. The event is intercepted by React and killed off.
  3. As far as the browser knows, nothing has happened to that slider.
  4. React then takes the information about my UI interaction, and sends it to the component that owns the slider I tried to manipulate.
  5. If that component accepts my attempt at changing the UI, it will update its state such that it now renders in a way that makes it look identical to the traditional HTML case (see the sketch after this list).
  6. As far as I'm concerned, as a user, I just moved the slider. Except in reality I didn't, my UI interaction asked React to have that interaction processed and that processing caused a UI update.
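As a concrete sketch of that flow (my own example, not code from our app), a controlled slider in createClass-style React keeps the value in state, and the slider only appears to move because the component re-renders with the new value:

var OpacitySlider = React.createClass({
  getInitialState: function() {
    return { opacity: 1 };
  },
  handleChange: function(evt) {
    // React hands us the interaction; setState triggers a re-render,
    // and that re-render is what actually moves the slider the user sees.
    this.setState({ opacity: parseFloat(evt.target.value) });
  },
  render: function() {
    return (
      <div>
        <p style={{ opacity: this.state.opacity }}>Some text</p>
        <input type="range" min="0" max="1" step="0.01"
               value={this.state.opacity}
               onChange={this.handleChange} />
      </div>
    );
  }
});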

This is so different from traditional HTML that you're going to forget that. And every time you do, things will feel weird, and bugs might even be born. So, just to hopefully at least address that a tiny bit, once more:

If used correctly, your users will think they are manipulating a UI, when in fact they are manipulating React, which may then update the UI

Now then, try to remember this forever (I know, simple request, right?) and let's move on.

Revisiting the core concepts of modeling "your stuff" with React

If you've been working with React for a while it's easy to forget where "your mental construct of a thing" ends and where your UI components begin, and that makes it hard to reason about when to use React's state, when to use props, when to use instance variables, and when to offload things entirely to imported functionality objects. So, a quick refresher on the various bits that we're going to be looking at, and how to make use of them:

Your thing

This is an abstract idea, and generally breaks up into lots of tiny things that all need to "do something" to combine into a larger whole that actual humans like to think in. "A blog post", "A page", or "a markdown editor" all fall into this category. When thinking about "your thing", it's tempting to call the specific instantiation of everything that this thing needs "its state", but I'm going to have to be curt and tell you to not do that. At least, let's be specific: whenever we talk about the thing's state, let's call it "the full state". That way we won't get confused later. If it doesn't have "full" in the description, it's not your thing's abstract meta all encompassing state.

React components

These are extremely concrete things, representing UI elements that your users will interact with. Components need not map one-to-one to those abstract ideas you have in your head. Think of components as things with three levels of data: properties, state, and "plain old javascript stuffs".

Component properties: "this.props"

These are "constructor" properties, and are dictated by whoever creates an instance of the component. However, React is pretty clever and can deal with some situations in ways you may not expect coming at React from a traditional HTML programming paradigm. Let's say we have the following React XML - also known as JSX (This isn't HTML or even XML, it's just a more convenient way to write out programming intent, and maps directly to a React.createElement call. You can write React code without ever using JSX, and JSX is always first transformed back to plain JS before React runs it. Which is why you can get JavaScript errors "in your XML", which makes no sense if you still think that stuff you wrote really is XML):

Parent = React.createClass({
  render() {
    return (<div>
      <...>
      <Child content={ getCurrentChildContent() }/>
      <...>
    </div>);
  }
});

The child's position is always the same in this case, and when the Parent first renders, it will create this Child with some content. But this isn't what really happens. React actually adds a level of indirection between the code you wrote and the stuff you see client-side (e.g. the browser, a native device, etc.): a VIRTUAL DOM has been created based on your JSX, and it is that VIRTUAL DOM that actually controls how things are changed client-side. Not your code. So the difference kicks in when we change the content that should be in that child and we rerender, to effect a new child:

  • Something happens to Parent that changes the output of getCurrentChildContent
  • Parent renders itself, which means the <Child> has changed.
  • React updates the VIRTUAL element associated with the Parent, and one of the updates is for the VIRTUAL Child element, which has a new property value
  • Rather than destroying the old Child and building a new one with the new property, React simply updates the VIRTUAL element so that it is indistinguishable from what things would have been had we destroyed and created anew.
  • the VIRTUAL DOM, once marked as fully updated, then reflects itself onto the client so that users see an updated UI.

The idea is that React is supposed to do this so fast you can't tell. And the reason React is so popular is that it actually does. React is fast. Really fast.

Where in traditional HTML you might remove(old) and then append(new), React will always, ALWAYS, first try to apply a "difference patch", so that it doesn't need to waste time on expensive construction and garbage collection. That makes React super fast, but also means you need to think of your components as "I am supplying a structure, and that structure will get updated" instead of "I am writing HTML elements". You're not.
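A quick way to see this for yourself (a hypothetical Child, not the one from the snippet above): add some lifecycle logging and note that, when the Parent re-renders with new content, the mount hook does not fire again; the same instance just receives new props.

var Child = React.createClass({
  componentDidMount: function() {
    // runs once, when the element is first created client-side
    console.log("Child mounted");
  },
  componentWillReceiveProps: function(nextProps) {
    // runs on every later Parent render: the existing Child is patched
    // with the new props instead of being destroyed and rebuilt
    console.log("Child updated with:", nextProps.content);
  },
  render: function() {
    return <div>{ this.props.content }</div>;
  }
});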

Component state: "this.state"

This is the state of the React component. A React component that represents a piece of interactive text, for instance, will have that text bound as its state, because that state can be changed by the component itself. Components do not control what's in their props (beyond the limited 'use these default values for props that were not passed along during construction'), but they do control their state, and every update to the state triggers a render() call.

This can have some interesting side effects, and requires some extra thinking: If you have a text element, and you type to change that text, that change needs to be reflected to the state before it will actually happen.

Remember that important sentence from the start of the post:

If used correctly, your users will think they are manipulating a UI, when in fact they are manipulating React, which may then update the UI

And then let's look at what happens:

  • the user types a letter in what they think is a text input field of some sort
  • the event gets sent to React, which kills it off immediately so the browser never deals with it, and then sends it on to the component belonging to the VIRTUAL element that backs the UI that the user interacted with
  • the component handles the event by extracting the data and updating its state so that its text reflects the new text
  • the component renders itself, which updates the VIRTUAL element that backs the UI that the user sees, replacing its old text (pre-user-input) with the new text (what-the-user-thinks-they-wrote). This change is then reflected to the UI.
  • the user sees the updated content, and all of this happened so fast that they never even notice that all this happens behind the scenes. As far as they know, they simply typed a letter.

If we didn't use this state reflecting, instead this would happen:

  • user types a letter
  • React kills off the event to the VIRTUAL element
  • there is no handler to accept the event, extract its value, and update the component state, so:
  • nothing happens.

The user keeps hitting the keyboard, but no text shows up, because nothing changes in React, and so nothing changes in the UI. As such, state is extremely important to get right, and remembering how React works is of crucial importance.

Semantically refactored state: mixins

In addition to properties and state, React has a "mixin" concept, which allows you to write utility code that can hook into any React class you're working with. For instance, let's look at an input component:

var Thing = React.createClass({
  getInitialState: function() {
    return { input: this.props.input || "" };
  },
  render: function() {
    return <input value={this.state.input} onChange={this.updateInput}/>
  },
  updateInput: function(evt) {
    this.setState({ input: evt.target.value }, function() {
      if (this.props.onUpdate) {
        this.props.onUpdate(this.state.input);
      }
    });
  }
});

Perfectly adequate, but if we have lots of components that all need to work with inputs, we can also do this:

var inputMixin = {
  getInitialState: function() {
    return {
      input: this.props.input || ""
    };
  },
  updateInput: function(evt) {
    this.setState({ input: evt.target.value }, function() {
      if (this.props.onUpdate) {
        this.props.onUpdate(this.state.input);
      }
    });
  }
};

var Thing = React.createClass({
  mixins: [ inputMixin ],
  render: function() {
    return <input value={this.state.input} onChange={this.updateInput}/>
  },
});

We've delegated the notion of input state tracking and UI handling to a "plain JavaScript" object. But, one that hooks into React's lifecycle functions, so even though we define the state variable input in the mixin, the component will end up owning it and this.state.input anywhere in its code will resolve just fine.

Mixins allow you to, effectively, organise state and behaviour in a finer-grained way than components alone allow. Multiple components that have nothing in common with respect to your abstract model can be very efficiently implemented by looking at which purely UI bits they share, and modeling those with single mixins. Less repetition, smaller components, better control.

Of course, it gets tricky if you refer to a state variable that a mixin introduces outside of that mixin, so that's a pitfall: ideally, mixins capture "everything" so that your components don't need to know they can do certain things, "they just work". As such, I like to rewrite the previous code to the following, for instance:

var inputMixin = {
  getInitialState: function() {
    return {
      input: this.props.input || ""
    };
  },
  updateInput: function(evt) {
    this.setState({ input: evt.target.value }, function() {
      if (this.props.onUpdate) {
        this.props.onUpdate(this.state.input);
      }
    });
  },
  // JSX generator function, so components using this mixin don't need to
  // know anything about the mixin "internals".
  generateInputJSX: function() {
    return <input value={this.state.input} onChange={this.updateInput}/>
  }
};

var Thing = React.createClass({
  mixins: [ inputMixin ],
  render: function() {
    return (
      <div>
        ...
        { this.generateInputJSX() }
        ...
      </div>
    );
  },
});

Now the mixin controls all the things it needs to, and the component simply relies on the fact that if it has loaded a mixin somethingsomethingMixin, it can render whatever that mixin introduces in terms of JSX with a call to the generateSomethingsomethingJSX function, which will do the right thing. If the state for this component needs to be saved, saving this.state will include everything that was relevant to the component and the mixin, and loading the state in from somewhere with a setState(stateFromSomewhere()) call will also do the right thing.

So now we can have two completely different components, such as a "Portfolio" component and a "User Signup" component, which have absolutely nothing to do with each other, except that they will both need the UI and functionality that the inputMixin can provide.

(Note that while it is tempting to use Mixins for everything, there is a very simple criterion for whether or not to model something using mixins: does it rely on hooking into React class/lifecycle functions like getInitialState, componentDidUpdate, componentWillUnmount, etc.? If not, don't use a mixin. If you just want to put common functions in a mixin, don't. Just use a library import, that's what they're for.)

Instance variables and externals

These things are handy for supporting the component, but as far as React is concerned they "don't matter", because updates to them do nothing for the UI unless there is extra code for manually triggering a state change. And you can't trigger a state change on an instance variable; state changes happen through setState and property updates by parents.

That said, React components are just plain JavaScript, so there is nothing preventing you from using the same JS constructs that we use outside of React:

var library = require("libname");
var Thing = React.createClass({
  mixins: [
    require("somemixin"),
    require("someothermixin")
  ],
  getInitialState: function() {
    this.elements = library.getStandardList();
    return { elements: this.elements };
  },
  addElement: function(e) {
    this.elements.push(e);
    this.setState({ elements: this.elements });
  },   
  render: function() {
    return this.state.elements.map(...);
  }
});

Perfect: in fact, using instance variables sometimes drastically increases legibility and ease of development, such as in this example. Calling addElement() several times in rapid succession, without this.elements, has the potential to lose state updates, effectively doing this:

  1. var l1 = this.state.elements; l1.push(e); setState({ elements: l1 });
  2. var l2 = this.state.elements; l2.push(e); setState({ elements: l2 });
  3. var l3 = this.state.elements; l3.push(e); setState({ elements: l3 });

Now, if l3 is created before setState for l2 has finished, then l3 is going to be identical to l1, and after it's set, l2 could be dropped over it, losing data twice!

Instance variables to the rescue.
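To make the race concrete, here is a sketch of the handler without the instance variable; because setState is asynchronous, two calls in quick succession can both read the same stale this.state.elements, and one of the additions gets lost:

  // The naive version: every call re-reads this.state, so rapid calls can
  // start from the same (stale) array and clobber each other's additions.
  addElement: function(e) {
    var list = this.state.elements.slice();
    list.push(e);
    this.setState({ elements: list });
  },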

Static properties on the component class

Finally, components can also be defined with a set of static properties, meaning they exist "on the class", not on specific instances:

var Thing = React.createClass({
  statics: {
    mimetypes: require("mimetypes")
  },
  render: function() {
    return <div>I am a { this.props.type }!</div>;
  }
});

var OtherThing = React.createClass({
  render: function() {
    return <Thing type={ Thing.mimetypes.font } />;
  }
});

Of course, like all good JS, statics can be any legal JS reference, not just primitives, so they can be objects or functions and things will work quite well.
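For instance (a made-up sketch), a static can be a helper function that lives on the class itself and can be called without ever creating an instance:

var Thing = React.createClass({
  statics: {
    // A plain function on the class, not on instances.
    isKnownType: function(type) {
      return ["font", "image", "video"].indexOf(type) !== -1;
    }
  },
  render: function() {
    return <div>I am a { this.props.type }!</div>;
  }
});

// Usable without an instance:
Thing.isKnownType("font"); // true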

Back to React: hooking up components

The actual point of this blog post, in addition to the opener sentence, was to look at how components can be hooked up, by choosing how to a) model state ownership, b) model component interactions, and c) model data propagation from one component to another.

This is going to be lengthy (but hopefully worth it), so let's just do this the itemized list way and work our way through. We have three lists:

State ownership:

  1. centralized ownership
  2. delegated ownership
  3. fragmented ownership
  4. black box ownership

Component interactions:

  1. Parent to Child
  2. Parent to Descendant
  3. Child to Parent
  4. Child to Ancestor
  5. Sibling to Sibling

Data propagation:

  1. this.props chains
  2. targeted events using publish/subscribe
  3. blind events broadcasting

So I'm going to run through these, and then hopefully at the end tie things back together by looking at which of these things work best, and why I think that is the case (with which you are fully allowed to disagree and we should talk! Talking is super useful).

Deciding on State Ownership

Centralized ownership

The model that fits the traditional HTML programming model best is the centralized approach, where one thing "owns" all the data, and all changes go through it. In our editor app, we can model this as one master component, "Parent", with two child components, "Post" and "Editor", which take care of simply showing the post, and editing the post, respectively.

Our post will consist of:

var marked = require("marked");
var Post = React.createClass({
  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: marked(this.props.content)
      }
    };
    return <div {...innerHTML}/>;
  }
});

Our editor will consist of:

var tinymce = require("tinymce");
var Editor = React.createClass({
  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: tinymce({
          content: this.props.content,
          updateHandler: this.onUpdate
        })
      }
    };
    return <div {...innerHTML}/>;
  },
  onUpdate: function(evt) {
    this.props.onUpdate(evt);
  }
});

And our parent component will wrap these two as:

var Parent = React.createClass({
  getInitialState: function() {
    return {
      content: "",
      editing: false
    };
  },

  render: function() {
    return (<div className="post">
      <Post   hidden={this.state.editing}  content={this.state.content} onClick={this.switchToEditor}/>
      <Editor hidden={!this.state.editing} content={this.state.content} onUpdate={this.onUpdate}/>
    </div>);
  },

  // triggered when we click the post
  switchToEditor: function() {
    this.setState({
      editing: true
    });
  },

  // Called by the editor component
  onUpdate: function(evt) {
    this.setState({
      content: evt.updatedContent,
      editing: false
    });
  }
});

In this setup, the Parent is the lord and master, and any changes to the content must run through it. Saving and loading of the post to and from a data repository would, logically, happen in this Parent class. When a user clicks on the post, the "hidden" flag is toggled, which causes the Parent to render with the Editor loaded instead of the Post, and the user can modify the content to their heart's content. Upon completion, the Editor uses the API that the Parent passed down to ensure that its latest data gets reflected, and we return to the Post view.

The important question is "where do we put save and load", and in this case that choice is obvious: in Parent.

var staterecorder = {
  componentWillMount: function() {
    this.register(this, function loadState(state) {
      this.setState(state);
    }.bind(this));
  },

  // register / unregister are assumed to talk to whatever external store
  // persists component state and hands it back via the loadState callback.

  componentWillUnmount: function() {
    this.unregister(this);
  }
};

var Parent = React.createClass({
  mixins: [
    require("staterecorder")
  ],

  getInitialState: function() {
    ...
  },

  getDefaultProps: function() {
    return { id: 1};
  },

  render: function() {
    ...
  },

  ...
});

But: why would the Parent be in control? While this design mirrors our "abstract idea", this is certainly not the only way we can model things. And look closely: why would that Post not be the authoritative source for the actual post? After all, that's what we called it. Let's have a look at how we could model the idea of "a Post" by acknowledging that our UI should simply "show the right thing", not necessarily map 1-to-1 to our abstract idea.

Delegated state management

In the delegated approach, each component controls what it controls. No more, no less, and this changes things a little. Let's look at our new component layout:

Our post is almost the same, except it now controls the content; as such, the content is now its state, and it has an API function for updating the content if a user makes an edit (somehow) outside of the Post:

var marked = require("marked");
var database = require("database");
var Post = React.createClass({
  getInitialState: function() {
    return { content: "" };
  },

  componentWillMount: function() {
    database.getPostFor({ id: this.props.id }, function(result) {
      this.setState({ content: result });
    }.bind(this));
  },

  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: marked(this.state.content)
      }
    };
    return <div {...innerHTML}/>;
  },

  setContent: function(newContent) {
    this.setState({
      content: newContent
    });
  }
});

Our editor is still the same, and it will do pretty much what it did before:

var tinymce = require("tinymce");
var Editor = React.createClass({
  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: tinymce({
          content: this.props.content,
          updateHandler: this.onUpdate
        })
      }
    };
    return <div {...innerHTML}/>;
  },
  onUpdate: function(evt) {
    this.props.onUpdate(evt);
  }
});

Our parent, however, has changed quite a bit. It no longer controls the content; it is simply a convenient construct that marries the authoritative component, with some id, to an editor when the user needs it:

var Parent = React.createClass({
  getInitialState: function() {
    return {
      editing: false
    };
  },

  render: function() {
    return (<div className="post">
      <Post   ref="post" hidden={this.state.editing}  id={...} onClick={this.switchToEditor}/>
      <Editor hidden={!this.state.editing} onUpdate={this.onUpdate}/>
    </div>);
  },

  // triggered when we click the post
  switchToEditor: function() {

    ??????

    this.setState({
      editing: true
    });
  },

  // Called by the editor component
  onUpdate: function(evt) {
    this.setState({
      editing: false
    }, function() {
      this.refs.post.setContent(evt.newContent);
    });
  }
});

You may have spotted the question marks: how do we now make sure that when we click the post, we get its content loaded into the editor? There is no convenient "this.props" binding that we can exploit, so how do we make sure we don't duplicate things all over the place? For instance, the following would work, but it would also be a little ridiculous:

var Parent = React.createClass({
  getInitialState: function() {
    return {
      editing: false,
      localContent: ""
    };
  },

  render: function() {
    return (<div className="post">
      <Post   ref="post" hidden={this.state.editing}  id={...} onClick={this.switchToEditor}/>
      <Editor hidden={!this.state.editing} content={this.state.localContent} onUpdate={this.onUpdate}/>
    </div>);
  },

  bindContent: function(newContent) {
    this.setState({
      localContent: newContent
    });
  },

  // triggered when we click the post
  switchToEditor: function() {
    this.setState({
      editing: true
    });
  },

  // Called by the editor component
  onUpdate: function(evt) {
    this.setState({
      editing: false
    }, function() {
      this.refs.post.setContent(evt.newContent);
    });
  }
});

We've basically turned the Parent into a surrogate Post now, again with its own content state variable, even though we set out to eliminate exactly that. This is not a path to success. We could try to circumvent this by linking the Post to the Editor directly in the function handlers:

var Parent = React.createClass({
  getInitialState: function() {
    return {
      editing: false
    };
  },

  render: function() {
    return (<div className="post">
      <Post   ref="post" id={...} hidden={this.state.editing} onClick={this.switchToEditor}/>
      <Editor ref="editor" hidden={!this.state.editing} onUpdate={this.onUpdate}/>
    </div>);
  },

  // triggered when we click the post
  switchToEditor: function() {
    this.refs.editor.setContent(this.refs.post.getContent(), function() {
      this.setState({
        editing: true
      });
    }.bind(this));
  },

  // Called by the editor component
  onUpdate: function(evt) {
    this.setState({
      editing: false
    }, function() {
      this.refs.post.setContent(evt.newContent);
    });
  }
});

This might seem better, but we've certainly not made the code easier to read by putting in all those async interruptions...

Fragmenting state across the UI

What if we took the genuinely distributed approach? What if we don't have "a Parent", with the Post and Editor being, structurally, sibling elements? This would certainly rule out the notion of duplicated state, but also introduces the issue of "how do we get data from the editor into the post":

var marked = require("marked");
var database = require("database");
// "somewhere" below stands in for whatever event manager we end up picking.

var Post = React.createClass({
  getInitialState: function() {
    return { content: "" };
  },

  componentWillMount: function() {
    database.getPostFor({ id: this.props.id }, function(result) {
      this.setState({ content: result });
    }.bind(this));

    somewhere.listenFor("editor:update", this.setContent);
  },

  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: marked(this.state.content)
      },
      onClick: this.onClick
    };
    return <div {...innerHTML}/>;
  },

  onClick: function() {
    // somehow get an editor, somewhere, to open...
    somewhere.trigger("post:edit", { content: this.state.content });
  },

  setContent: function(newContent) {
    this.setState({
      content: newContent
    });
  }
});

The obvious thing to notice is that the post now needs to somehow be able to trigger "an editor", as well as listen for updates.

var tinymce = require("tinymce");
var Editor = React.createClass({
  componentWillMount: function() {
    somewhere.listenFor("post:edit", function(evt) {
      this.contentString = evt.content;
    }.bind(this));
  },

  render: function() {
    var innerHTML = {
      dangerouslySetInnerHTML: {
        __html: tinymce({
          content: this.contentString,
          updateHandler: this.onUpdate
        })
      }
    };
    return <div {...innerHTML}/>;
  },
  onUpdate: function(evt) {
    somewhere.trigger("editor:update", evt);
  }
});

Again, this seems less than ideal. While the Post and Editor are now nice models, we're spending an awful lot of time in magical-async-event-land, and as designers, programmers, and contributors, we basically have no idea what's going on without code diving.

Remember, you're not just writing code for you, you're also writing code for people you haven't met yet. We want to make sure we can onboard them without going "here are the design documents and flowcharts, if you see anything you don't understand, please reach out and good luck". We want to go "here's the code. It's pretty immediately obvious how everything works, just hit F3 in your code editor to follow the function calls".

Delegating all state to an external black box

There is one last thing we can try: delegating all state synchronizing to some black box object that "knows how to state, yo". For instance, a database interfacing thing through which we perform lookups and save/load all state changes. Of all the options we have, this is the one that is absolutely the most distributed, but it also comes with some significant drawbacks.

var api = {
  save: function(component, state) {
    // update our data store, and once that succeeds, update the component
    datastore.update(component, state).success(function() {
      component.setState(state);
    });
  },
  // load is the obvious counterpart (sketched here): hand back whatever
  // state the store last recorded for this component.
  load: function(component, callback) {
    datastore.fetch(component).success(callback);
  }
};

var Post = React.createClass({
  ...
  componentWillMount: function() {
    api.load(this, function(state) {
      this.setState(state);
    }.bind(this));
  },
  setContent: function(newContent) {
    api.save(this, {
      content: newContent
    });
  }
});

This seems pretty handy! We don't update our UI until we know the datastore has the up-to-date state, so our application is now super portable, and multiple people can, in theory, all work on the same data. That's awesome, free collaboration!

The downside is that this is a UI-blocking approach: if for some reason the data store fails, components won't get updated despite there being no technical reason for that to happen. Or worse, the data store can be very slow, so updates land while the user is already trying to make the UI do something else, and actions the user took earlier end up conflicting with their current ones.

Of course, we can reverse the order of commits and UI updates, but that introduces an even harder problem: invalidating the UI if it turns out the changes cannot be committed. While the api approach has neat benefits, those benefits rely on your infrastructure being reliable and fast. If that cannot be guaranteed, then contacting a data store for committing states manually may be a better solution, because it limits the store interactions to bootstrapping (i.e. loading previously modified components) and user-initiated synchronization (save buttons, etc).
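A minimal sketch of that "update first, sync on demand" idea; the api.save signature with an id and a callback is an assumption for this sketch, different from the blocking example above:

var Post = React.createClass({
  // ... state handling as before ...
  save: function() {
    // Only touch the data store when the user asks for it; the UI was
    // already updated optimistically via setState.
    api.save(this.props.id, this.state, function(err) {
      if (err) {
        // Surface the failure after the fact; the UI never blocked on the store.
        console.error("save failed", err);
      }
    });
  },
  render: function() {
    return (<div>
      { /* ...post content... */ }
      <button onClick={this.save}>Save</button>
    </div>);
  }
});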

Dealing with Component Relations

Parent to Child: construction properties

This is the classic example of using construction properties. Typically the Parent should never tell the Child to do things via API calls or the like, but simply set up the needed property values, so that the Child can do "whatever it needs to do based on those".

Parent to Descendant: a modeling error

In React, this relationship is essentially void. Parents should only be concerned about what their children look like, and nothing else. If there are things hanging under those children, those things should be irrelevant to the Parent. If they're not, this is a sign that the choice of which components map to which abstract concepts was not thought out well enough (yet), and needs redoing.

Child to Parent: this.props

Children can trigger behaviour in their Parents as long as the Parent supplies the API to do so via construction properties. If a Parent has an API for "doing something based on a Child doing something", that API can be passed along during Child construction in the same way that primitive properties are passed in. React's JSX is just "JavaScript, with easier to read syntax", so the following:

render: function() {
  return <Child content={this.state.content} onUpdate={this.handleChildUpdate}/>;
}

is equivalent to:

render: function() {
  return React.createElement(Child, {
    content: this.state.content,
    onUpdate: this.handleChildUpdate
  });
}

And the child can call this.props.onUpdate locally whenever it needs the Parent to "do whatever it needs to do".
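The child side of that contract could look like this (a sketch; the onUpdate and content names simply mirror the render call above):

var Child = React.createClass({
  handleChange: function(evt) {
    // Tell the Parent, via the API it handed us, that something changed.
    this.props.onUpdate(evt.target.value);
  },
  render: function() {
    return <input value={this.props.content} onChange={this.handleChange}/>;
  }
});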

Child to Ancestor: a modeling error

Just like how Parents should not rely on descendants, only direct children, Children should never care about their Ancestors, only their Parents. If the Child needs to talk to its ancestor, this is a sign that the choice of which components map to which abstract concepts was, again, not thought out well enough (yet), and needs redoing.

Sibling to Sibling: a modeling error

As "intuitive" as it might seem for siblings to talk to each other (after all, we do this in traditional HTML setting all the time), in React the notion of "siblings" is irrelevant. If a child relies on a sibling to do its own job, this is yet another sign that the choice of which components map to which abstract concepts was not thought out well enough (yet), and needs redoing.

Deciding on how to propagate data

Chains of this.props.fname() function calls

The most obvious way to effect communication is via construction properties (on the Parent side) and this.props (on the Child side). For simple Parent-Child relationships this is pretty much obvious (after all, it's what makes React work as a technology), but what if we have several levels of components? Let's look at a page with a menu system:

Page → menu → submenu → option

When the user clicks on the option, the page should do something. This feels like the option, or perhaps the submenu, should be able to tell the Page that something happened, but this isn't entirely true: the semantics of the user interaction changes at each level, and having a chain of this.props calls might feel "verbose", but accurately describes what should happen, and follows React methodology. So let's look at those things:

var Option = React.createClass({
  render: function() {
    return <li onClick={this.props.optionPicked}>{ this.props.label }</li>;
  }
});

var Submenu = React.createClass({
  render: function() {
    var options = this.props.options.map(function(option) {
      return <Option key={option.label}
                     label={option.label}
                     optionPicked={function() { this.select(option.label); }.bind(this)} />;
    }, this);
    return <ul className={ ["menu", this.props.menuName].join(" ") }>{ options }</ul>;
  },
  select: function(label) {
    this.props.onOptionChosen(label);
  }
});

var Menu = React.createClass({
  render: function() {
    var submenus = this.props.menus.map(function(menu) {
      return <Submenu key={menu.name}
                      menuName={menu.name}
                      options={menu.options}
                      onOptionChosen={function(option) { this.select(menu.name, option); }.bind(this)}/>;
    }, this);
    return <div className="menu-bar">{ submenus }</div>;
  },
  select: function(menu, option) {
    this.props.onSelect(menu, option);
  }
});

var Page = React.createClass({
  render: function() {
    return (<div>
      <Header>
        <Menu menus={require("menusystem")} onSelect={this.navigate}/>
      </Header>
      { this.formSections() }
      <Footer ... />
    </div>);
  },
  navigate: function(category, topic) {
    // load in the appropriate section for the category/topic pair given.
  }
});

At each stage, the meaning of what started with "a click" changes. Yes, ultimately this leads to some content being swapped in in the Page component, but that behaviour only matters inside the Page component. Inside the Menu component, the important part is learning what the user picked as submenu and option, and communicating that up. Similarly, in the Submenu the important part is knowing which option the user picked. Contrast this to the Menu, where it is also important to know which submenu that "pick" happened in. Those are similar, but different, behaviours. Finally, in the Option, the only thing we care about is "hey parent: the user clicked us. Do something with that information".

"But this is arduous, why would I need to have a full chain when I know that Menu and Submenu don't care?" Well, for starters, they probably do care, because they'll probably want to style themselves when the user picks an option, such that it's obvious what they picked. It's pretty unusual to see a straight up, pass-along chain of this.props calls, usually a little more happens at each stage.

But what if you genuinely need to do something where the "chain" doesn't matter? For instance, you need to have any component be able to throw "an error" at an error log or notifying component that lives "somewhere" in the app and you don't know (nor care) where? Then we need one of the following two solutions.

Targeted events using the Publish/Subscribe model

The publish/subscribe model for event handling is the system where you have a mechanism to fire off events "at an event monitor", which will then deliver (copies of) that event to anyone who registered as a listener. In Java, this is the "EventListener" interface; in JavaScript it's basically the document.addEventListener + document.dispatchEvent(new CustomEvent(...)) approach. Things are pretty straightforward, although we need to make sure to never, ever use plain strings for our event names, because hot damn is that asking for bugs once someone starts to refactor the code:

var EventNames = require("eventnames");

var SomeThingSomewhere = React.createClass({
  mixins: [
    require(...)
  ],
  componentWillMount: function() {
    if (somethingCrucialFailed) { // pseudocode: retrieval of something crucial failed
      this.generate(EventNames.ERROR, {
        msg: "something went terribly wrong",
        code: 13
      });
    }
  },
  render: function() {
    ...
  },
  ...
});

var ErrorNotifier = React.createClass({
  mixins: [
    require(...)
  ],
  getInitialState: function() {
    return { errors: [] };
  },
  componentWillMount: function() {
    this.register(EventNames.ERROR, this.showError);
  },
  render: function() {
    ...
  },
  showError: function(err) {
    this.setState({
      errors: this.state.errors.slice().concat([err])
    });
  }
});

We can send off error messages into "the void" using the publish/subscribe event manager, and have the ErrorNotifier trigger each time an error event comes flying by. The reason we can do this is crucial: when a component has "data that someone might be able to use, but is meaningless to myself" then sending that data off over an event manager is an excellent plan. If, however, the data does have meaning to the component itself, like in the menu system above, then the pub/sub approach is tempting, but arguably taking shortcuts without good justification.
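For completeness, here is a minimal sketch of what the eventnames module and the pub/sub mixin assumed above could look like; the file names and the register/generate method names are just illustrative choices:

// eventnames.js: constants instead of scattered string literals, so a
// refactoring tool (or grep) can find every use.
module.exports = {
  ERROR: "app:error"
};

// pubsubmixin.js: a tiny event manager shared by every component that mixes it in.
var listeners = {};
module.exports = {
  register: function(name, handler) {
    (listeners[name] = listeners[name] || []).push(handler);
  },
  generate: function(name, data) {
    (listeners[name] || []).forEach(function(handler) { handler(data); });
  }
};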

Of course we can take the publish/subscribe model one step further, by removing the need to subscribe...

Events on steroids: the broadcasting approach

In the walkie-talkie method of event management, events are sent into the manager, but everybody gets a copy: no ifs, no buts, the events are simply thrown at you, and if you can't do anything with them, you ignore them. It's a bit like a bus or taxi dispatcher, where everyone is listening in on the same radio frequency, which is why in the Flux pattern this kind of event manager is called the Dispatcher.

A Dispatcher pattern simplifies life by removing the need to explicitly subscribe to specific events: you just see all the events fly by and, if you know that you need to do something based on one or more of them, you "do your job". The downside, of course, is that there will generally be more events that you don't care about than events that you do, so the Dispatcher pattern is great for applications with lots of independent "data generators" and "consumers", but not so great if you have a well modeled application, where you (as designer) can point at various components and say what they should reasonably care about, without resorting to blind events.
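A bare-bones sketch of the broadcasting idea (this is an illustration, not Flux's actual Dispatcher API): every registered callback sees every payload and decides for itself whether to act.

var Dispatcher = {
  callbacks: [],
  register: function(callback) {
    this.callbacks.push(callback);
  },
  dispatch: function(payload) {
    // No filtering: everyone gets everything.
    this.callbacks.forEach(function(callback) { callback(payload); });
  }
};

var ErrorNotifier = React.createClass({
  getInitialState: function() {
    return { lastError: null };
  },
  componentWillMount: function() {
    Dispatcher.register(function(payload) {
      // Ignore anything that isn't meant for us.
      if (payload.type === "error") {
        this.setState({ lastError: payload.msg });
      }
    }.bind(this));
  },
  render: function() {
    return <div className="errors">{ this.state.lastError }</div>;
  }
});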

You promised to circle back, so: what should I go with?

Perhaps not surprisingly, I can't really tell you, at least not with authority. I have my own preferences, but need trumps preference, so choose wisely.

If you're working with React, then depending on where you are in your development cycle, as well as learning curve, many of the topics covered are things you're going to run into, and it's going to make life weird, and you'll need to make decisions on how to proceed based on what you need.

As far as I'm concerned, my preference is to "stick with React" as much as you can: a well modeled centralized component that maintains state, with this.props chaining to propagate and process updates, letting render() take care of keeping the UI in sync with what the user thinks they're doing, dipping sparingly into the publish/subscribe event model when you have to (such as a passive reflector component, like an error notifier that has no "parent" or "child" relationships, it's just a bin to throw data into).

I also prefer to solve problems before they become problems by modeling things in a way that takes advantage of everything it has to offer, which means I'm not the biggest fan of the Dispatcher model, because it feels like when that becomes necessary, an irreparable breakdown of your model has occurred.

I also don't think you should be writing your components in a way that blocks them from doing the very thing you use React for: having a lightning fast, easy to maintain user interface. While I do think you should be saving and syncing your state, I have strong opinions on "update first, then sync" because the user should never feel like they're waiting. The challenge then is error handling after the fact, but that's something you generally want to analyse and solve on a case-by-case basis.

I think you should use state to reflect the component state, no more, no less, and wherever possible, make that overlap with the "full state" that fits your abstract notion of the thing you're modeling; the more you can props-delegate, and the less you need to rely on blind events, the better off your code base is going to be. Not just for you, but also for other developers and, hopefully, contributors.

And before closing, an example: implementing editable elements.

Let's look at something that is typical of the "how do we do this right?" problem: editable forms. And I mean generic forms, so in this case, it's a form that lets you control various aspects of an HTML element.

This sounds simple, and in traditional HTML sort of is simple: set up a form with fields you can change, tie their events to your thing's settings, and then update your thing based on user interaction with the form. In React things have to necessarily happen a little differently, but to the user it should feel the same. Change form → update element.

Let's start with something simple: the element. I know, React already has pre-made components for HTML elements, but we want a freely transformable and stylable one. In abstract, we want something like this:

element:
  attributeset:
  - src
  - alt
  - title  
  transform:
  - translation
  - rotation
  - scale
  - origin
  styling:
  - opacity
  - border

Which, at a first stab, could be the following React component:

var utils = require("handyHelperUtilities");

var Element = React.createClass({
  getInitialState: function() {
    return utils.getDefaultElementDefinition(this.props);
  }, 
  render: function() {
    var CSS = utils.convertToCSS(this.state);
    return (<div style={CSS}>
      <img src={this.state.src} alt={this.state.alt} title={this.state.title}/>
    </div>);
  }
});

But, does that make sense? Should this component ever be able to change its internal state? Yes, the abstract model, expressed as, for instance, a database record, would certainly treat "changed data" as the same record with new values but the same id, but functionally, the component is just "expressing a bunch of values via the medium of a UI component", so there isn't actually any reason for these values to be "state" as such. Let's try this again, but this time making the Element a "dumb" component that simply renders what it is given:

var utils = require("handyHelperUtilities");

var Element = React.createClass({
  getDefaultProps: function() {
    // Defaults are computed once per class, so they can't depend on instance
    // props; they just fill in anything the caller didn't pass.
    return utils.getDefaultElementDefinition();
  },
  render: function() {
    var CSS = utils.convertToCSS(this.props);
    return (<div style={CSS}>
      <img src={this.props.src} alt={this.props.alt} title={this.props.title}/>
    </div>);
  }
});

Virtually identical, but this is a drastically different thing: instead of suggesting that the values it expresses are controlled by itself, this is simply a UI component that draws "something" based on the data we pass it when we use it. But we know these values can change, so we need something that does get to manipulate values. We could call that an Editor, but we're also going to use it to show the element without any editorial options, so let's make sure we use a name that describes what we have:

var EditableElement = React.createClass({
  getInitialState: function() {
    return ...?
  },
  componentWillMount: function() {
    ...?
  },
  render: function() {
    var flatProperties = utils.flatten(this.state);
    return <Element {...flatProperties}/>;
  }
});

Let's build that out: we want to be able to edit this editable element, so let's also write an editor:

// Inside our (assumed) handyHelperUtilities module:
var utils = {
  ...
  generateEditorComponents: function(properties, updateCallback) {
    return Object.keys(properties).map(name =>
      utils.getEditorComponent(name, properties[name], updateCallback)
    );
  },
  getEditorComponent: function(name, value, updateCallback) {
    var Controller = utils.getReactComponent(name);
    return <Controller value={value} onUpdate={updateCallback} />;
  },
  ...
};

var Editor = React.createClass({
  render: function() {
    return (<div>
      {utils.generateEditorComponents(this.props, this.onUpdate)}
    </div>);
  },
  onUpdate: function(propertyLookup, newValue) {
    ...?
  }
});

Now: how do we get these components linked up?

EditableElement → (Editor, Image)

The simplest solution is to rely on props to just "do the right thing", with updates triggering state changes, which trigger a render() which will consequently just do the right thing some more:

var EditableElement = React.createClass({
  ...
  render: function() {
    var flatProperties = utils.flatten(this.state);
    flatProperties.onUpdate = this.onUpdate;
    return (<div>
      { this.state.editing ? <Editor {...flatProperties}/> : <Element {...flatProperties}/> }
    </div>);
  },
  onUpdate: function(propName, newValue) {
    var curState = this.state;
    var updatedState = utils.update(curState, propName, newValue);
    this.setState(updatedState);
  }
});

In fact, with this layout, we can even make sure the Editor has a preview of the element we're editing:

var Editor = React.createClass({
  render: function() {
    return (<div>
      <div className="preview">
        <Element {...this.props}/>
      </div>
      <div className="controls">
        {utils.generateEditorComponents(this.props, this.onUpdate)}
      </div>
    </div>);
  },
  onUpdate: function(propertyLookup, newValue) {
    this.props.onUpdate(propertyLookup, newValue);
  }
});

Excellent! The thing to notice here is that the EditableElement holds all the strings: it decides whether to show a plain element, or the editor-wrapped version, and it tells the editor that any changes it makes, it should communicate back, directly, via the onUpdate function call. If an update is sent over, the EditableElement updates its state to reflect this change, and the render chain ensures that everything "downstream" updates accordingly.

Doesn't that mean we're updating too much?

Let's say the Editor has a slider for controlling opacity, and we drag it from 1.0 to 0.5. The Editor calls this.props.onUpdate("opacity", 0.5), which makes the EditableElement call setState({opacity: 0.5}), which calls render(), which sees an update in state, which means React propagates the new values to the Editor, which sees an update in its properties and so calls its own render(), which then redraws the UI to match the exact same thing as what we just turned it into. Aren't we wasting time and processing on this? We're just getting the Editor's slider value up into the Element, we don't need a full redraw, do we?

Time to repeat that sentence one more time:

If used correctly, your users will think they are manipulating a UI, when in fact they are manipulating React, which may then update the UI

To recap: we did not actually change that slider to 0.5 first, so we definitely need that redraw, because nothing has changed yet! You're initiating a change event that React gets, after which updates may happen, but the slider hasn't updated yet. React takes your requested change, kills it off as far as the browser is concerned, and then forwards the "suggestion" in your event to whatever handles value changes. If those changes get rejected, nothing happens. For example, if our element is set to ignore opacity changes, then despite us trying to drag the opacity slider, that slider will not budge, no matter how much we tug on it.
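To make that concrete, here's a sketch of an EditableElement that rejects opacity updates; the lockOpacity prop is an assumption for this sketch. Because the slider's value always flows back down from state, refusing to call setState means the slider simply never moves:

var EditableElement = React.createClass({
  // ...
  onUpdate: function(propName, newValue) {
    if (propName === "opacity" && this.props.lockOpacity) {
      // Reject the suggested change: no setState, no re-render, and the
      // controlled slider stays exactly where it was.
      return;
    }
    this.setState(utils.update(this.state, propName, newValue));
  }
});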

Extended editorial control

We can extend the editor so that it becomes more and more detailed, while sticking with this pattern. For instance, say that in addition to the simple editing, we also want some expert editing: there's some "basic" controls with sliders, and some "expert" controls with input fields:

var SimpleControls = React.createClass({
  render: function() {
    return (<div>
      { utils.generateSimpleEditorComponents(this.props.properties, this.props.onUpdate) }
    </div>);
  }
});

var ExpertControls = React.createClass({
  render: function() {
    return (<div>
      { utils.generateExpertEditorComponents(this.props.properties, this.props.onUpdate) }
    </div>);
  }
});

var Editor = React.createClass({
  render: function() {
    return (<div>
      <div className="preview">
        <Element {...this.props}/>
      </div>
      <div className="controls">
        <SimpleControls properties={this.props} onUpdate={this.onUpdate}/>
        <ExpertControls properties={this.props} onUpdate={this.onUpdate}/>
      </div>
    </div>);
  },
  onUpdate: function(propertyLookup, newValue) {
    this.props.onUpdate(propertyLookup, newValue);
  }
});

Done. The Editor is still responsible for moving data up to the EditableElement, and the simple vs. expert controls simply tap into the exact same properties. If the parent is rendered with updates, they will "instantly" propagate down.

And that's it...

If you made it all the way to the bottom, I've taken up a lot of your time, so first off: thanks for reading! But more importantly, I hope there was some new information in this post that helps you understand React a little better. And if there's anything in this post that you disagree with, or feel is weirdly explained, or know is outright wrong: let me know! I'm not done learning either!

Mozilla Security BlogChanges to the Firefox Bug Bounty Program

The Bug Bounty Program is an important part of security here at Mozilla.  This program has paid out close to 1.6 million dollars to date and we are very happy with the success of it.  We have a great community of researchers who have really contributed to the security of Firefox and our other products.

Those of us on the Bug Bounty Committee did an evaluation of the Firefox bug bounty program as it stands and decided it was time for a change.

First, we looked at how much we award for a vulnerability.  The amount awarded was increased to $3000 five years ago and it is definitely time for this to be increased again.  We have dramatically increased the amount of money that a vulnerability is worth.  On top of that, we took a look at how we decided how much we should pay out.  Rather than just one amount that can be awarded, we are moving to a variable payout based on the quality of the bug report, the severity of the bug, and how clearly the vulnerability can be exploited.

Finally, we looked into how we decide what vulnerability is worth a bounty award.  Historically we would award $3000 for vulnerabilities rated Critical and High.  Issues would come up where a vulnerability was interesting but was ultimately rated as Moderate.  From now on, we will officially be paying out on Moderate rated vulnerabilities.  The amount that is paid out will be determined by the committee, but the general range is $500 to $2000.  This doesn’t mean that all Moderate vulnerabilities will be awarded a bounty but some will.

All of these changes can be found on our website here.

Another exciting announcement to make is the official release of our Firefox Security Bug Bounty Hall of Fame!  This page has been up for a while but we haven’t announced it until now.  This is a great place to find your name if you are a researcher who has found a vulnerability or if you want to see all the people who have helped make Firefox so secure.

We will be making a Web and Services Bug Bounty Hall of Fame page very soon. Keep an eye out for that!

https://www.mozilla.org/en-US/security/bug-bounty/hall-of-fame/

Feel free to mail us at security@mozilla.com with any questions!

Honza BambasNew Gecko performance tool: Backtrack

Backtrack aims to show a complete code path flow from any point back to its source, crossing asynchronous callbacks, threads, processes, network requests, timers and any kind of implementation specific queuing plus capturing any I/O or mutex blockade.  The ‘critical flow execution path’ is put to a context of all the remaining concurrent execution flows.  It’s then easy to examine how the critical flow is blocked and delayed by concurrent tasks.

The work is tracked in this bug, where you also find patches and build instructions.  There is also an add-on that, in Backtrack enabled builds, allows you to view actual captured data.

Click the screenshot below to view an interactive preview. It's a capture of the load of my blog's main page up to the first-paint notification (no e10s and no network predictor, to demonstrate the capture capabilities).

backtrack-preview-1

Backtrack combines*) Gecko Profiler and Task Tracer.

Gecko Profiler (SPS) provides instrumentation (already spread around the code base) to capture static call stacks.  I’ve enhanced the SPS instrumentation to also capture objects (i.e. 'this' pointer value) and added a simple base class to easily monitor object lifetime (classes must be instrumented.)

Task Tracer (TT) on the other hand provides a generic way to track back on runnables – but not on e.g. network poll results, network requests or implementation specific queues.  It was easy to add a hook into the TT code that connects the captured object inter-calls with information about runnables dispatch source and target.

The Backtrack experimental patch:

  • Captures object lifetime (simply add ProfilerTracked<class Derived> as a base class to track the object lifetime and class name automatically)
  • Annotates objects with resource names (e.g. URI, host name) they work with at run-time
  • Connects stack and object information using the existing PROFILER_LABEL_FUNC instrumentation recording this pointer value automatically; this way it collects calls between objects
  • Measures I/O and mutex wait time; an object holding a lock can be easily found
  • Sticks receipt of a particular network response exactly to its actual request transmission (here I mainly mean HTTP, but it also applies to connect() and DNS resolution)
  • Joins network polling “ins” and “outs”
  • Binds code-specific queuing and dequeuing, like our DNS resolver, HTTP request queues.  Those are not just ‘dispatch and forget’ like nsIEventTarget and nsIRunnable but rather have priorities, complex dequeue conditions and may not end up dispatched to just a single thread.  These queues are very important from the resource scheduling point of view.

Followups:

  • IPC support, i.e. cross also processes
  • Let the analysis also mark anything ‘related’ to achieving a selected path end (e.g. my favorite first-paint time and all CSS loads involved)
  • Probably persist the captured raw logs and allow the analysis to be done offline

Disadvantages: just one – significant memory consumption.

*) The implementation is so far not deeply bound to SPS and TT memory data structures.  I do the capture on my own – actually a third data collection, alongside SPS and TT.  I’m still proving the concept this way, but if it's found useful and bearable to land in this form as a temporary way of collecting the data, we can optimize and clean up as followup work.

The post New Gecko performance tool: Backtrack appeared first on mayhemer's blog.

Joel MaherPlease welcome the Dashboard Hacker team

A few weeks ago we announced that we would be looking for committed contributors for 8+ weeks on Perfherder.  We have found a few great individuals, all of whom show a lot of potential and make a great team.

They are becoming familiar with the code base and are already making a dent in the initial list of work set aside.  Let me introduce them (alphabetical order by nicks):

akhileshpillai – Akhilesh has jumped right in and started fixing bugs in Perfherder.  He is new to Mozilla and will fit right in.  With how fast he has come up to speed, we are looking forward to what he will be delivering in the coming weeks.  We have a lot of UI workflow as well as backend data refactoring work on our list, all of which he will be a great contributor towards.

mikeling – mikeling has been around Mozilla for roughly two years; recently he started out helping with a few projects on the A*Team.  He is very detail oriented, easy to work with and is willing to tackle big things.

theterabyte – Tushar is excited about this program as an opportunity to grow his skills as a python developer and to experience how software is built outside of a classroom.  Tushar will get a chance to grow his UI skills on Perfherder by making the graphs and compare view more polished and complete, while helping out with an interface for the alerts.

Perfherder will soon become the primary dashboard for all our performance needs.

I am looking forward to the ideas and solutions these new team members bring to the table.  Please join me in welcoming them!


Joel MaherPlease join me in welcoming the DX Team

A few weeks ago, I posted a call out for people to reach out and commit to participate for 8+ weeks.  There were two projects and one of them was Developer Experience.  Since then we have had some great interest, there are 5 awesome contributors participating (sorted by irc nicks).

BYK – I met BYK 3+ years ago on IRC – he is a great person and very ambitious.  As a more senior developer he will be focused primarily on improving interactions with mach.  While there are a lot of little things to make mach better, BYK proposed a system to collect information about how mach is used.

gma_fav – I met gma_fav on IRC when she heard about the program.  She has a lot of energy, seems very detail oriented, asks good questions, and brings fresh ideas to the team!  She is a graduate of the Ascend project and is looking to continue her growth in development and open source.  Her primary focus will be on the interface to try server (think the try chooser page, extension, and taking other experiments further).

kaustabh93 – I met Kaustabh on IRC about a year ago and since then he has been a consistent friend and hacker.  He attends university.  In fact I do owe him credit for large portions of alert manager.  While working on this team, he will be focused on making run-by-dir a reality.  There are two parts: getting the tests to run green, and reducing the overhead of the harness.

sehgalvibhor – I met Vibhor on IRC about 2 weeks ago.  He was excited about the possibility of working on this project and jumped right in.  Like Kaustabh, he is a student who is just finishing up exams this week.  His primary focus this summer will be working in a similar role to Stanley in making our test harnesses act the same and more useful.

stanley – When this program was announced Stanley was the first person to ping me on IRC.  I have found him to be very organized, a pleasure to chat with and he understands code quite well.  Coding and open source are both new things to Stanley and we have the opportunity to give him a great view of it.  Stanley will be focusing on making the commands we have for running tests via mach easier to use and more unified between harnesses.

Personally I am looking forward to seeing the ambition folks have translate into great solutions, learning more about each person, and sharing with Mozilla as a whole the great work they are doing.

Take a few moments to say hi to them online.


Air MozillaMartes mozilleros

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David BurnsWebDriver Specification - Have you read it lately?

A lot of work has gone into the WebDriver Specification this year. The methods in there have had a major makeover to make them more specific about the steps that are required, as well as to include the relevant links. Go have a read of it and feel free to raise bugs against it; we will be updating it quite regularly. You can see all the work that is happening on Github. We do everything via pull requests, so you can read things before they land.

My team have also been working hard at making sure that our implementation is following the specification, and we are making some great leaps with it. I will be releasing a development version of the python bindings soon that use the httpd, like InternetExplorerDriver and ChromeDriver, to drive the browser. Currently our httpd only works against Nightly, but a merge to Aurora is happening soon, at which point we will be sending out links for you to start playing with it all. I am actually looking forward to some of the feedback that we get about it.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1170922] unable to collapse comments — they bounce back open
  • [1146772] alternative comment ordering isn’t supported (newest first, etc)
  • [1146781] implement the “ignore bug mail” checkbox
  • [1170796] making the Rank field appear for more products
  • [1170414] Redirection loop in *.cgi when path-info is present
  • [1171074] Removing a custom field does not clear config memcache
  • [1170179] Do not automatically set ‘firefox-affected’ release-tracking flags for comm-central products.
  • [1171471] Product Mozilla Services should be Cloud Services
  • [1146775] implement comment collapse all and by tag
  • [1146779] add search result navigation
  • [1171818] link fields to descriptions (where appropriate)
  • [1170004] Pressing back after a bug submission error leaves the “Save Changes” button greyed out
  • [1171576] In attachment-conversion script, preserve extra, non-MozReview flags
  • [1171826] sort keywords by “starts with” then “substring”
  • [1172386] create a “feature” for the elasticsearch metrics and ensure its dependencies are tagged
  • [1171963] account/email/confirm-new.html.tmpl should describe the current password complexity setting to new user just like the forgotten password form
  • [1172649] Create New Attachment comment previews are not working due to Error: TypeError: YAHOO.util.Connect is undefined

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Mike ConleyThings I’ve Learned This Week (June 1 – June 5, 2015)

How to get an nsIGlobalObject* from a JSContext*

I’m working on a patch for bug 1116188 to make gathering profiles from subprocesses asynchronous. In order to do that, I’m exposing a new method on nsIProfiler called getProfileDataAsync that is returning a DOM Promise. What’s interesting about this is that I’m returning a DOM Promise from C++! 1

In order to construct a DOM Promise in C++, I need to hand it something that implements nsIGlobalObject. I suspect that this helps the Promise determine which memory region it belongs to.

My new method gets a JSContext* because I’ve got the [implicit_jscontext] bit on the method definition in the nsIProfiler.idl file… so how do I go about turning that into an nsIGlobalObject?

Here’s the maneuver:

// Where aCx is your JSContext*:
nsIGlobalObject* go = xpc::NativeGlobal(JS::CurrentGlobalOrNull(aCx));

That will, as the name suggests, return either an nsIGlobalObject*, or a nullptr.

Resolving a DOM Promise from C++ with a JS Object

For my patch for bug 1116188, it’s all well and good to create a DOM Promise, but you have to resolve or reject that Promise for it to have any real value.

In my case, I wanted to take a string, parse it into a JS Object, and resolve with that.

Resolving or rejecting a DOM Promise in Javascript is pretty straight-forward – you’re given back resolve / reject function, and you just need to call those with your results and you’re done.

In C++, things get a little hairier. As I discovered in my most recent episode of The Joy of Coding, conditions need to be right in order for this to work out.

Here’s what I ended up doing (I’ve simplified the method somewhat to remove noise):

void
ProfileGatherer::Finish()
{
  AutoJSAPI jsapi;
  jsapi.Init();
  JSContext* cx = jsapi.cx();
  JSAutoCompartment ac(cx, mPromise->GlobalJSObject());

  // Now parse the JSON so that we resolve with a JS Object.
  JS::RootedValue val(cx);
  {
    UniquePtr<char[]> buf = mWriter.WriteFunc()->CopyData();
    NS_ConvertUTF8toUTF16 js_string(nsDependentCString(buf.get()));
    MOZ_ALWAYS_TRUE(JS_ParseJSON(cx, static_cast<const char16_t*>(js_string.get()),
                                 js_string.Length(), &val));
  }
  mPromise->MaybeResolve(val);
}

The key parts here are getting the AutoJSAPI on the stack, initting it, getting its JSContext, and then putting the JSAutoCompartment on the stack. Note that I had to pass not only the JSContext, but the global JS Object for the Promise as well – I suspect that’s, again, to ensure that the right compartment is being entered. Otherwise, I start failing assertions like crazy.

Note that the code above is by no means perfect – I’m missing error handling functions for when the JSON parsing goes wrong. In that case, I should probably reject the Promise instead. bz pointed me to a good example of that going on here in Fetch.cpp:

      if (!JS_ParseJSON(cx, decoded.get(), decoded.Length(), &json)) {
        if (!JS_IsExceptionPending(cx)) {
          localPromise->MaybeReject(NS_ERROR_DOM_UNKNOWN_ERR);
          return;
        }

        JS::Rooted<JS::Value> exn(cx);
        DebugOnly<bool> gotException = JS_GetPendingException(cx, &exn);
        MOZ_ASSERT(gotException);

        JS_ClearPendingException(cx);
        localPromise->MaybeReject(cx, exn);
        return;
      }

      localPromise->MaybeResolve(cx, json);
      return;

I’ll probably end up doing something similar in the next iteration of my patch.


  1. I learned how to do that a few weeks back

Kevin NgoAccess React Component Props from an Event Handler

I got the green light to go trigger-happy on imagining a fresh codebase for future Firefox Marketplace projects. So I threw in some React, ES6, Flux, and Browserify, and got a stew going. Although amidst all of the modern cutting-edge frameworks and tools, sometimes we forget the basics. I ran into a problem where I needed to access a component's props from an event handler (onSubmit), but briefly stumbled on getting a reference to them.

React.createClass({
  onSubmit: e => {
    // this is undefined.
    this.props.flux.getActions('my-actions').go();
  },
  render() {
    return <form onSubmit={this.onSubmit}/>
  }
});

For a brief second, this left me scratching my head on how to access the flux instance. But duh, closures! Among the first things you learn about Javascript: a function has access to the scope in which it was declared, and that access stays live rather than being a snapshot.

React.createClass({
  render() {
    const onSubmit = e => {
      // this.props is live!
      this.props.flux.getActions('my-actions').go();
    };
    return <form onSubmit={onSubmit}/>
  }
});

Web development is entering a grand age, and I don't even know where to start talking. This topic is a bit lame compared to the tons of fresh patterns and cool ideas I've been stumbling all over. But despite being a shiny world, I'll always have the fundamentals on hand.

Jeff MuizelaarIntel driver crash of the day

In bug 1170143 we ran into an Intel driver crash when trying to share DXGI_FORMAT_A8_UNORM surfaces. Older Intel drivers crash while opening the texture using OpenShareHandle. The driver successfully opens BGRA surfaces, but not alpha surfaces which we want to use for video playback. Who knows why... Here's a test case.

Christian HeilmannUA Sniffing issue: Outdated PageSpeed sending WebP images to Microsoft Edge

PageSpeed by Google is a great way to add clever performance enhancements to your site without having to do a lot by hand. Not surprisingly, a lot of people used it when it came out. Sadly enough, people then don’t upgrade it when Google does which means there are a lot of sub-optimal installations out there.

This wouldn’t be an issue, if the old versions didn’t use user agent sniffing to try to detect a browser, which leads to a lot of false positives.

Figure 1: User Agent Sniffing in action (dogs sniffing each other's backsides)

One of these false positives is that Microsoft Edge Mobile is detected as Chrome, which means that PageSpeed converts images to WebP. MS Edge does not support WebP, which is why you’ll have broken images:

broken images on faz.net

The fix: upgrade PageSpeed

The fix is easy: just upgrade your PageSpeed to the latest version, as the team has moved on from UA sniffing. There should not be any backwards compatibility issues. Upgrading can be done via package managers on Apache, but with NGINX it requires compilation. Version 1.8 was the first version that turned on WebP transcoding by default. Version 1.9 fixed it by making sure it works off of the Accept header rather than the UA string.
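As a sketch of what header-based negotiation looks like (an Express-style Node handler, purely for illustration and nothing to do with PageSpeed's internals): serve WebP only when the browser advertises support in its Accept header.

// Content negotiation by Accept header instead of UA sniffing.
var express = require("express");
var app = express();

app.get("/images/photo", function(req, res) {
  var accept = req.headers.accept || "";
  if (accept.indexOf("image/webp") !== -1) {
    res.sendFile(__dirname + "/images/photo.webp");
  } else {
    res.sendFile(__dirname + "/images/photo.jpg");
  }
});

app.listen(8080);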

How to test if your server does it right

If you want to test if a server does the right thing (which is using accept headers instead of UA sniffing), use MS Edge.

A quick and dirty way is also to change your user agent string to the following and surf to the site. This is effectively doing reverse sniffing, so it is OK for catching faulty detection scripts, but it is not a good idea for real capability/interoperability testing.

Mobile UA String for Edge (please, don’t use for sniffing)

Mozilla/5.0 (Windows Phone 10; Android 4.2.1; Microsoft; NOKIA) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Mobile Safari/537.36 Edge/12.0

Desktop UA String for Edge (please, don’t use for sniffing)

Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0

You can do this in most developer tools (I use the User Agent Switcher extension in Firefox which is also available for Chrome). If you are on Windows/IE or MS Edge, you can go to the F12 developer tools and change the browser profile to “phone”.

Got it fixed? Thanks! Tell us so we can praise you

If you upgraded and fixed this interop issue, feel free to ping me or @MSEdgeDev and we’ll be happy! Let’s fix the web, one bad sniff at a time.

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

About:CommunityMDN Contributor of the Month for May 2015: Saurabh Nair

Congratulations to Saurabh Nair, who is the MDN Contributor of the Month for May 2015. He was selected from among the MDN contributors who received an MDN Star badge in May for significant contributions to MDN.

Photo of Saurabh Nair

Saurabh has been contributing to MDN since 2011, and became more active in the last year. He was one of the participants at the Hack on MDN weekend in Berlin earlier this year. He is on the “spam watch” team, who look out for spam pages, deleting them and banning the spammers as soon as they appear. Since he lives in India, he can do this while MDN staff members in Europe and North America are sleeping.

Here is an interview with Saurabh, conducted via email:

When and how did you get started contributing to MDN?

I started referring to MDN docs in 2011 when I started my web development career. Whenever I found typos or errors, I used to correct them. I would also add links where more clarification was required and such. But it was only in 2014 that I began to spend more time on MDN, still mostly for my own learning, but doing editorial reviews, etc. Around that time I got involved with the MDN community also, which is a really nice bunch of people.

The first full article I wrote on MDN was on a new CSS property called will-change, under guidance from Jean-Yves Perrier. Writing it was a great learning experience and finishing it was gratifying. I’ve written a couple other full articles since then and looking forward to write more. Also, it fills me with pride every time I see one of the articles I wrote translated by someone I don’t know to some language I can’t read. Seriously, I feel like Shakespeare right about then. :)

How does what you do on MDN affect other parts of your life, or vice versa?

I’m a web developer by profession, and very interested in the happenings around web technologies. Correctness and clarity are very much stressed at MDN. For instance, I once witnessed a discussion on the #mdn IRC channel about whether to use the word “updated” or “overwritten” in an article. The difference was subtle in the case, but it still mattered, and was valued and debated. I know for a fact that working on MDN has improved my knowledge and in turn improved the quality of my office work. Also I got to meet a number of great people through MDN, and that has positively affected my personal life too.

And in reverse, being a professional web developer has made it kind of easy and natural for me to work on related things on MDN.

What advice do you have for new contributors on MDN?

Everything you do is valued, whether it is tagging articles, doing reviews, writing, or just about anything. If you like technical writing and learning about new web technologies, helping with MDN is going to be a really rewarding experience.

QMOFirefox 39 Beta 3 Testday Results

Hello mozillians!

As you may already know, last Friday – June 5th – we held another Testday event, this time for Firefox 39 Beta 3.

We’d like to take this opportunity to thank everyone for getting involved in the proposed testing activities and in general, for helping us make Firefox better.

Many thanks are due again to our very active Bangladesh QA Community (Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Rakibul Islam Ratul, Md.Ehsanul Hassan and Mohammad Maruf Islam), Aleksej, and kenkon for their extraordinary efforts and contributions, and to all our moderators. Thanks a bunch!

Keep an eye on QMO for future events! 😉

Mozilla Open Policy & Advocacy BlogAnnouncing the 2015 Ford-Mozilla Open Web Fellows

Building and supporting leaders equipped to protect the open Web is a core part of Mozilla’s advocacy strategy. After a comprehensive, worldwide search, we are excited to introduce the 2015 Ford-Mozilla Open Web Fellows!

Paola Villarreal | American Civil Liberties Union, Massachusetts
Tim Sammut | Amnesty International
Andrea Del Rio | Association for Progressive Communications
Drew Wilson | Free Press
Gem Barrett | Open Technology Institute
Tennyson Holloway | Public Knowledge

Find out more about what they will be doing in 2015.

The 2015 class represents the diversity of the Web — they come from around the world and bring skills ranging from security analysis and digital campaigning to games and apps development. Each Fellow will spend 10 months immersed in a host organization, working to advance Mozilla’s mission in policy areas including privacy, access, expression and more.

Learn about the competitive selection process.

The Open Web Fellows program — a collaboration between the Ford Foundation and Mozilla — is an international program designed to engage developers, engineers, technologists, programmers and other webmakers with civil society organizations around the world to help protect the Internet. The Fellowship program is designed to create an ecosystem that will strengthen and build leaders dedicated to advancing the free and open Web.

During the 10-month Fellowship, this next generation of leaders will be immersed in projects that address key needs with respect to digital freedom. The Fellowships will produce better technical understanding among civil society and government policy-making bodies, a stronger affirmative agenda that creates better public awareness and understanding of Internet policy issues, and improved cross-sector coordination. Fellows will serve as mentors, advisors and ambassadors, helping to develop a better-informed global policy discussion.

The 2015 class of Ford-Mozilla Open Web Fellows are working with some of the world’s leading organizations to help ensure the public policy climate safeguards the Internet as a shared, global and public resource. In this inaugural year, Fellows are embedded within six organizations: the American Civil Liberties Union (ACLU), Amnesty International, the Association for Progressive Communications (APC), Free Press, Open Technology Institute and Public Knowledge.

Please join us in welcoming the 2015 class of Ford-Mozilla Open Web Fellows. We are eager to see their impact in the months to come.

Robert O'CallahanIsrael, Part 1

I just got back today from an amazing trip to Israel. I was invited by the Technion to speak at their TCE 2015 conference. So I did that, added a few days for sight-seeing since I don't like to travel halfway around the world for just a couple of days of work, and also had a full day in Hong Kong on the way there.

I've never been outside the airport in Hong Kong before, so I was excited to finally do that. I took the tram up Tai Ping Peak, walked the Lugard Road circuit and back down to the city, then spent the afternoon at Cheung Chau island. It was incredibly hot and humid, and I was so busy looking for drinks and icy desserts that I forgot to eat proper food until the end of the day. I enjoyed the coastal track on Cheung Chau island, through jungle and the back streets of villages, though the Cheng Po Tsai "pirate cave" was a bit disappointing --- comparable sea caves at Piha don't even have names :-).

Checking into my El Al flight I experienced the famous El Al security. The very nice security man asked me endless questions, including topics like what I did in the USA and who I knew there from the Middle East. It all went well, aided by the Technion's letter, but I was nervous and felt a bit of an urge to confess to something (anything!) to make the man happy. Two local security officers with very large guns were watching the whole area and I wondered what happens if you accidentally give a wrong answer...

Early Sunday morning I got a taxi from Ben Gurion airport to Haifa. Excitingly, the taxi driver drove at 130km/h most of the way, with one hand on the steering wheel --- because he only had one arm. I stayed at the Colony Hotel, which was generally excellent and who very generously checked me into a room at 6am! Later in the morning I walked around the waterfront and up the front side of the famous Mount Carmel, then to the Bahai Gardens tour with my friend Nimrod --- amazing. We had a late, long lunch with a couple of other conference people and I spent the rest of the day relaxing and working in my room.

An unexpected blessing of my trip was the food. I've never had much Middle Eastern/Mediterranean food before --- there's not much of it here in Auckland --- so I had a lot of very good food that's new to me. I will never think about hummus the same way again.

The TCE conference was held on Monday and Tuesday. It was an eclectic group of speakers, but most of them were very good at giving talks accessible to a broad audience, and I really enjoyed hearing a broad range of computer science talks like I used to in my grad student days at CMU. There were very interesting talks on complexity theory, databases, architecture, compilers, testing, verification, and more. One of the speakers was my former CMU officemate Bianca --- an unexpected pleasure to meet again in such an exotic location! Having been mostly out of the academic circuit for a long time, I felt many waves of imposter syndrome, and so I was rather nervous leading up to my talk. It all seemed to mysteriously come right as soon as I got started --- I'll chalk that up to God and prayer. Without a clock or a proper practice run, even the timing was perfect; we had time for a couple of questions before a wailing siren cut us off --- a scheduled national-defense drill. I think we were supposed to flee to a shelter but most people just went to get coffee. I had a great time talking to researchers at the conference, and learned a lot of useful information.

On Monday night we had a speakers' dinner at the Technion, on a balcony with a view towards the Golan (and on a clear day, Mount Hermon, apparently). It was sobering to think of the brutal civil war raging just beyond our vision. On Tuesday night I had one of the most extraordinary meals of my life at a Druze village near Haifa --- a dozen large side dishes, most new to me; a huge main dish consisting of kebab meats, rice and cheese, with a layer of bread baked on top; and an irresistible kanafe dessert. I had no desire or need to eat anything the whole of the following day --- which I'll cover in the next post.

Dave HuntJoining Web QA

Dylan with origami fox

I’m excited to announce that as of last week I am officially on Mozilla’s Web QA team! Despite working closely with the team since I started at Mozilla over four years ago, I’ve always reported to another team. Originally I had a hybrid role, where I reported to the Director of QA and assisted with automation of both Web and Desktop products. Later, we formed the QA Automation Services team, which existed to provide automation services to any QA team. This was mostly absorbed into the A-Team, which shares a lot of the same goals but is not just focused on QA. During my time with the A-Team a lot of my work started to shift towards Firefox OS, so it made sense during an organisational shift towards vertical stacks for me to officially join the Firefox OS automation team.

Many times since I started at Mozilla I’ve felt that I had more to offer the Web QA team, and I’ve always been a keen contributor to the team. I can’t say it was an easy decision to move away from Firefox OS, as it’s a terrifically exciting project, but the thought of joining Web QA just had me bursting with enthusiasm! In a time where there’s been a number of departures, I’m pleased to say that I feel like I’m coming home. Look out web – you’re about to get even more automated!

Last week Stephen Donner interviewed me on being a Mozilla Web QA contributor. You can read the blog post over on the Web QA blog.

I’d like to take this opportunity to thank Stephen Donner, Matt Evans, Clint Talbert, Jonathan Griffin, and James Lal for their support and leadership. I’ve made so many great friendships at Mozilla, and with our Whistler work week just around the corner I’m so looking forward to catching up with many of them in person!

Mozilla Release Management TeamFirefox 39 beta2 to beta3

A reasonable beta in terms of the number of changesets accepted. Mostly stability fixes and minor improvements.

  • 35 changesets
  • 92 files changed
  • 905 insertions
  • 518 deletions

Extension: Occurrences
cpp: 21
json: 17
h: 15
py: 9
js: 6
jsm: 3
build: 3
sh: 2
in: 2
xul: 1
xhtml: 1
list: 1
ini: 1
html: 1
c: 1

Module: Occurrences
js: 22
b2g: 17
dom: 12
testing: 7
toolkit: 6
layout: 4
browser: 4
widget: 3
netwerk: 3
gfx: 3
modules: 2
mobile: 1
ipc: 1
build: 1

List of changesets:

Boris ZbarskyBug 1168207. Be a bit more careful with overflow checking in XHR. r=baku a=lizzard - cf279fc867d5
Andrea MarchesiniBug 1166924 part 0 r=bent a=lizzard - 36bf5bcceb27
Wes KocherBug 1166924 part 1 r=baku a=lizzard - 528d47773256
Milan SreckovicBug 1166082: Check if the lock succeeded before using it. r=bschouten a=lizzard - 15a7bce855fe
Ben TurnerBug 1163109 - Restrict the resource:// weirdness in workers to loads from a system principal. r=bzbarsky, a=lizzard - f49be454944b
Gijs KruitboschBug 995697 - Remove obsolete geolocation private browsing test. r=jdm, a=test-only - 4b0c26bdab21
Tim NguyenBug 1166867 - Support -moz-os-version: windows-win10. r=jimm, a=sledru - e712de047451
Seth FowlerBug 1163740 - Paper over qcms crashes due to NaN values in qcms_transform::matrix. r=dmajor, a=sledru - 9463b7173dd9
Joel MaherBug 1162753 - Update talos. r=wlach, a=test-only - 5c4d05edc7cf
Francois MarierBug 1167493 - Application Reputation: disable remote lookup of zip files on Mac/Linux. r=gcp, a=sledru - 80cde01ed15c
Maire ReavyBug 1137057 - Interactive Intelligence added to screensharing whitelist. r=jesup, a=sledru - 415472b5a680
Mats PalmgrenBug 1003441 - Check that the new frame is also a nsSubDocumentFrame (i.e. that Init has picked up the detached views). r=roc, a=sledru - abcd95bb7212
Honza BambasBug 1122420 - Improve after-shutdown dispatch assertion on CacheIOThread. r=michal, a=sledru - 082a23c146bd
Chris ManchesterBug 1169798 - Refresh the marionette server's window reference when switching between windows to avoid intermittent exception.;r=ato a=test-only - d8904a3f0278
Jan de MooijBug 1160884 - Add KeepAlive instructions after elements/slots uses. r=nbp, a=abillings - 6d8c0c9dc553
Ted MielczarekBug 1162060 - Add Socorro auth token to Android and B2G mozconfigs. r=mshal, a=NPOTB - adbf7c8af745
Ted MielczarekBug 1162060 - Add socorro auth token to B2G device build mock environments. r=bhearsum, a=NPOTB - 683cbf33c92a
Terrence ColeBug 1156045 - Only print jstests assertion output on failure. r=sfink, a=test-only - b8a4ed23ed26
Tooru FujisawaBug 1162456 - Part 1: Add --format option in jstests.py and jit_test.py as a replacement for --tinderbox. r=sfink, a=test-only - 9d6eae8f4c6e
Tooru FujisawaBug 1162456 - Part 2: Use --format=automation instead of --tinderbox. r=sfink, a=test-only - ec01909d713e
Tooru FujisawaBug 1169199 - Check platform argument on linux SpiderMonkey shell build and specify target and host on 32bit build. r=sfink, a=test-only - 05122c19b3d7
Tooru FujisawaBug 1155985 - Set FieldInto::mType just before storing to reserved slot. r=jonco, a=abillings,lizzard - 04e07d5a9b00
Patrick McManusBug 1144602 - test_rel_preconnect orange. r+a=test-only - 499efe6e8a4b
Dave TownsendBug 1168954 - Unexpected "Install" button appears on add-on install progress doorhanger. r=dao, a=lizzard - bb5ac2094352
Aaron KlotzBug 1133351 - Part 1: Make Windows IPC play nicely with COM STA marshaling. r=bsmedberg, a=lizzard - 8f1677195e6f
Aaron KlotzBug 1133351 - Part 2: Use SetWinEventHook to detect OLE Window. r=bent, a=lizzard - 59793d7e1b7e
James WillcoxBug 1167197 - Fix GMPProvider on Android r=cpearce a=lizzard - e2ed03987d19
Liz HenryPost Beta 3 for short beta cycle: disable EARLY_BETA_OR_EARLIER a=lizzard - 49e75ecb84f8
Daniel HolbertBug 765078 - Give SMIL time events a longer grace period to fire, in test_smilTimeEvents.xhtml. a=test-only - c271d2f06862
Nihanth SubramanyaBug 1163559 - Search engine icons are always displayed at low resolution. r=florian, a=lizzard - 8ecfee4a2185
Edwin FloresBug 1160101 - Disable Adobe EME by default; enable only in Mozilla builds. r=glandium, a=lizzard - 516ef88d8790
Aaron KlotzBug 1151318 - Add quirks flag to help Unity plugin release mouse capture. r=jimm, a=lizzard - a75365b95a17
Andrea MarchesiniBug 1169867 - nsXMLHttpRequest should use and free mProxy correctly. r=ehsan, a=abillings - f16daa2effd1
Andrea MarchesiniBug 1166900 - Better string length check in nsZipArchive::GetDataOffset. r+a=dveditz - 634d32969bd6
Andrea MarchesiniBug 1167888 - Better string length check in nsZipArchive::BuildFileList. r=smaug, a=dveditz - 62bb5056f458

Karl DubostOld CSS syntax with prefixes

Working on Web Compatibility issues for Japan is a kind of nightmare. The two biggest offenders are the first versions of WebKit flexbox and gradients. Web developers hurried to use flexbox for their Web sites, but never dared to fix them once the new stable syntax was released.

  1. Web Developers used CSS prefixes to support browsers
  2. Browsers have to support Web sites because of their outdated syntax.

When we do not update our sites to include the stable standardized syntax, we do a few things.

  1. We force browsers to keep old code around (good luck to WebKit/Blink for deprecating old flexbox and/or gradients)
  2. We force browsers to add code to support old CSS on Web sites (Mozilla and Microsoft are forced to support the -webkit- syntax just to give users Web sites that work)
  3. We give users a broken user experience when the browsers are not fixing the site for them.
  4. The Web site owners (Web agency clients) possibly get bad press for providing a site that does not work in certain browsers.

Conclusion: Nobody wins!

Otsukare!

Graeme McCutcheonOn funkierJS... and other, more serious matters

Note: this is a temporary static site. The full site will be restored once I get some domain hosting issues resolved.

I occasionally flirt with functional programming. Well, to tell you the truth, it's more of a vague attempt to repent for a misspent youth. After all, I studied at Glasgow: a university that happens to know a thing or two about putting a Haskell compiler together. It is to my eternal shame that I took just one course in functional programming, albeit supplemented by a later reacquaintance in a project involving SECD machines and (shudders) Java.

In recent years, the Javascript world has belatedly caught up to Mr Eich's incredible foresight in designing a language with first-class functions. Hey, we can—to an extent—program functionally too! Libraries like Underscore and LoDash have enjoyed a meteoric rise. However, as has been noted by wiser heads than I, their APIs make composability harder due to the positioning of parameters. The preponderance of optional parameters and different signatures worries me too. Other libraries go some way to fixing that, but add currying and partial application as an afterthought, and frequently don't provide as full an API as Underscore and co., leaving you with an awful lot of scaffolding to build up before you can do anything useful.

This has left me reinventing some functional wheels once too often. But if there's one thing our Nodefathers have taught us, it's that the world can always use another Javascript library. So, with a certain inevitability, enter stage left funkierJS.

funkierJS offers some features that I've felt are lacking elsewhere. First and foremost: currying and partial application are front and centre. In all the libraries I've seen, it's up to you to curry functions as you need to—funkierJS's API functions are curried from the start, allowing you to quickly build useful partially applied functions. Whenever you hand funkierJS a function, you're going to get a curried function back. Further, most expositions of currying in Javascript, if followed, will quickly lead to death by a thousand parentheses, turning a call of some 3-argument function f into the tedious f(1)(2)(3). funkierJS provides what I call pragmatic curried functions. Only have 1 argument? Sure, I'll take it. But if you have more, then funkierJS's curried functions will take them and do the right thing. That earlier call could now be any of f(1)(2)(3), f(1, 2)(3), f(1)(2, 3), f(1, 2, 3).
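
To make that calling convention concrete, here is a toy sketch of pragmatic currying in plain JavaScript. It is not funkierJS's actual implementation, just an illustration of the behaviour described above:

  // Toy "pragmatic" curry: each call consumes however many arguments you
  // supply, and the underlying function runs once enough have accumulated.
  function pragmaticCurry(fn) {
    return function curried() {
      var args = Array.prototype.slice.call(arguments);
      if (args.length >= fn.length) {
        return fn.apply(null, args);
      }
      return function () {
        var more = Array.prototype.slice.call(arguments);
        return curried.apply(null, args.concat(more));
      };
    };
  }

  var f = pragmaticCurry(function(a, b, c) { return a + b + c; });

  f(1, 2, 3); // 6
  f(1, 2)(3); // 6
  f(1)(2, 3); // 6
  f(1)(2)(3); // 6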

funkierJS's curried functions accept that they don't exist in a pure mathematical paradise, and we live in a real world, full of grubby state. funkierJS provides various options for allowing your functions to handle state: the curry function will always call the underlying function with this set to null, a clear declaration that your function cares not for the state of the outside world. Sometimes you need to tiptoe onto the slippery slope: the bind currying method takes an object with your function. The resulting function has its this context forever fixed to the supplied object.

Sometimes you just need to go full JS. In a feature I believe to be unique to funkierJS, one can use a method I've christened "object-currying". This currying mechanism is designed for currying methods on object prototypes: each time you invoke an object-curried method on a particular object, the execution context for the partial applications will be dynamically bound to the object that the method was invoked on. See the following (artificial) example:


  function F(x) { this.x = x; }

  F.prototype = { foo: funkierJS.objectCurry(function(y, z) { return this.x + y + z; }) }

  var obj1 = new F(1);
  var obj2 = new F(10);

  var partial1 = obj1.foo(2);
  var partial2 = obj2.foo(2);

  partial1(3); // 6 i.e. 1 + 2 + 3
  partial2(3); // 15 i.e. 10 + 2 + 3

The partial applications partial1 and partial2 automatically pick up the correct execution context, turning the object into a factory for partially applied functions. In fact, objectCurry can even be used to curry constructors invoked with new (though instanceof will report false for the partial applications created along the way).

Another frustration I encounter with other functional libraries is that, while they provide the basics, they still leave you with too much work to do before they can be really useful. funkierJS relieves the burden by providing curried implementations of most of the ES5 API (not just the functional-oriented parts): the functions you need should be there, ready to go without you having to put in the effort. While implementing this, I took the opportunity to smooth out the sharper edges - functions in the standard with optional parameters become different functions with different signatures (boy, was the plethora of Date constructors fun!). Anyone who has ever passed parseInt to Javascript's native array map won't be bitten by the same problem here.
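
For anyone who has not hit that particular gotcha, here it is; the unary wrapper is a hand-rolled stand-in for the kind of fixed-arity functions the library provides, not funkierJS's actual API:

  // Array.prototype.map passes (value, index, array), and parseInt happily
  // treats the index as a radix.
  ['10', '10', '10'].map(parseInt);        // [10, NaN, 2]

  // A fixed-arity wrapper avoids the surprise.
  function unary(fn) {
    return function(x) { return fn(x); };
  }

  ['10', '10', '10'].map(unary(parseInt)); // [10, 10, 10]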

Of course, adding all these functions made the API somewhat, er, plump. So many functions! To ease the pain somewhat, funkierJS has a built-in help function: pass it another funkierJS function, and you can get help right there in your REPL or web console, without having to pause to google API docs.

So, that is the essence of funkierJS. I think it is now ready for an initial release, although there are a couple of missing pieces—more below—and this initial release has concentrated on functionality not performance. Install it from npm, or visit the website.

Caveat emptor

There is one major problem going forward. I should warn you that having now released this, I'm going to be unresponsive to issues and pull requests. You see, my light tone in the paragraphs above has been an exercise in burying the lede. There's a problem.

Come Tuesday, June 9th, I will be homeless.

I don't want to dwell too much on the reasons, though I'd quickly point out there are no addiction issues or anything of that nature at play. Just the nexus of economic misfortune, and my own belligerence and pride. The pride, at least, is certainly about to be offset with a grim lesson in humility.

We're not talking about shacking up in a cheap hotel or camping on friends' sofas; there are no happy endings ahead. I don't even have a car to camp in. This is full on, worst-case-scenario life on the streets. It's somewhat surreal: today I sit here blogging, with access to food, drink, and even the frothiness of Twitter. In 48 hours, I won't even have shelter.

You'll forgive me then, I hope, that funkierJS is not quite as complete as I'd like, and you'll perhaps accept I'm about to go in to absent maintainer mode. I'm sorry. However, I hope you'll accept it as part of the legacy I leave behind of my time on the internet. It's not been a bad run: micropolisJS was, in retro-city simulator circles, a minor hit. My Mozilla contributions, though rarely high-profile, I hope improved the quality-of-life of Mozilla developers and end-users alike: from a mad idea that became a webapp now considered a fairly critical part of Mozilla's internal workflow, to little details like the server-side hooks that print out a handy link for pasting into bugs, to fixing at-the-time 10 year old bugs that bedevilled Mozilla luminaries, to find-in-page, to the unloved Editor code...they will endure during my enforced absence. I'm howling with rage at the mandatory opportunity cost being imposed. I had started on a clean-sheet reimplementation of V8Monkey that I now won't get to see through. Instead, I enter a world where Mozilla summits, pull requests, bugs, and even most components of ordinary life will be totally alien and far removed from my ongoing experiences. And, I am very, very frightened.

Hopefully I'll make it back in one piece. Look after the Web while I'm gone. I believe in the transformative power of technology, and the web in particular, although I rue that I didn't manage to transform my own life with it. There is a Paypal donation link if you're willing to help towards my survival out there, although after Tuesday it's unlikely that I'll be able to access Paypal, or indeed anything else. Perhaps things might turn around, and I'll see you someday at some tech event, and be able to tell you I survived. I'd sure like that.

Au revoir.

This Week In RustThis Week in Rust 82

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

In summary

It's been a few weeks since the last TWiR. Things have happened.

RustCamp 2015 was announced, though many details are still to be determined. Please submit talk proposals.

Several I/O stabilization patches have been nominated for backporting to 1.1, filling out the somewhat meager story we had for 1.0. With things moving somewhat slowly since the last release, I/O improvements will probably be the most interesting part of the next.

If you haven't recently, give the playpen another look as it's received several new features lately, including a new design from Chris Morgan, the ability to output Intel-flavor asm, automatically post GitHub gists, and run #[test] functions.

What's cooking on master?

346 pull requests were merged in the last three weeks.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

Approved RFCs

New RFCs

Internals discussions

New Contributors

  • Akshay Chiwhane
  • Alexander Artemenko
  • Alex Stokes
  • Andrew Foote
  • Austin Hellyer
  • benaryorg
  • Brian Quinlan
  • Christian Stadelmann
  • Chuck Bassett
  • clatour
  • Cornel Punga
  • David Campbell
  • David Voit
  • dmgawel
  • econoplas
  • edunham
  • Eric Ye
  • Erik Michaels-Ober
  • Felix S Klock II
  • funkill
  • Iven Hsu
  • Jexell
  • Kubilay Kocak
  • Lorenz
  • Marcus Klaas
  • Markus Westerlind
  • Matej Ľach
  • Mathieu David
  • Max Jacobson
  • Michael Layzell
  • Mohammed Attia
  • Nick Fitzgerald
  • Nils Liberg
  • OlegTsyba
  • olombard
  • Parker Moore
  • Paul Oliver
  • petrochenkov
  • Rein Henrichs
  • Rémi Audebert
  • Steve Gury
  • Thomas Karpiniec
  • Tim Ringenbach
  • webmobster
  • Will Andrews

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Dude, you can't just tell people it's the secret plan; then it won't be a secret any more! You keep this up, and you're going to get your Secret Rust Club card revoked! Then you won't be able to get on the awesome Secret Rust Zeppelin. Don't screw this up, man!"

Quxxy, from /r/rust.

"The 1st rule of Secret Rust Club is: you don't talk about Secret Rust Club.

The 2nd rule of Secret Rust Club is: error: 1st rule does not live long enough.

error: aborting due to previous error"

JakDrako, from the same thread.

Thanks to drbawb and Manishearth for the tip. Submit your quotes for next week!

Raniere SilvaMathml May Meeting

Mathml May Meeting

Note

Sorry for the delay in writing this post.

This is a report about the Mozilla May IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on June 10th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Read more...

Robert O'CallahanSmall Change To rr Behavior

To address issue 1490, I just checked in an rr improvement that changes the behavior of gdb+rr in a user-visible way. When the last thread of the debuggee process exits, rr generates a fake SIGKILL signal before exiting the process. This gives the user a chance to reverse-execute from the end of the debuggee execution, whereas previously rr would have let the process terminate normally, so reverse execution could not occur. The new behavior is a little bit of a lie since in most cases SIGKILL was never actually sent, but I hope the usefulness of this new behavior outweighs any possible confusion.

Mike ConleyThe Joy of Coding (Ep. 17): Frustrations in the Key of C++

In this episode, I gave a quick update on the OS X printing bug we’d been working on for a few weeks (Spoiler alert – the patch got reviewed and landed!), and then dove into my new problem: getting performance profiles from subprocesses asynchronously.

And, I won’t lie to you, this is probably the most frustrating episode in the series so far. I really didn’t make much headway.

The way I want to solve this problem involves passing a DOM Promise back to the Javascript caller that resolves when all of the profiles have been gathered asynchronously.

If I were writing this in Javascript, it’d be a cinch. Creating, passing around, and resolving Promises is pretty straight-forward in that world.

But the Gecko profiler backend is written entirely in C++, and so that’s where I’d have to create the Promise.

A few weeks back, I posted a “Things I’ve learned this week” about how to create DOM Promises in C++. That’s all well and good, but creating the Promise is only half of the job. You have to resolve (or reject) the Promise in order for it to be useful at all.

The way I wanted to resolve the Promise involved parsing a JSON string and resolving with the resulting object.

That turned out to be a lot harder than I thought it’d be. Watch the video to see why. Suffice it to say, I spent a lot of it asking for help on IRC. It’s a 100% accurate demonstration of what I do when I’m lost, or can’t figure something out, and I need help.

Since I recorded this episode, I’ve figured out what I needed to do – I’ve posted a “Things I’ve learned this week” containing that information. Hopefully that’ll help somebody else in the future!

Oh – also, this episode has sound effects, courtesy of Wacky Morning DJ (which I demonstrated in last week’s episode).

Episode agenda.

References

Bug 1116188 – [e10s] Stop using sync messages for Gecko profiler

Notes

Julien PagèsPython code cleanup – vulture

I’m pretty sure you all already know about flake8, a great tool that combines other tools (PyFlakes, pep8 and Ned Batchelder’s McCabe script) to check style issues and code errors.

Working on Talos, I needed a way to find dead code in the codebase (the less code, the better!). I found a pretty good tool that helped me: vulture. This is a one-shot tool – it cannot be automated like flake8, as there are some false positives, but running it once on an old codebase can still be a big win for finding and removing useless stuff.

Obviously it is easier to use on python programs – you will have to whitelist a lot of things if you run this on a python library that offers a public API.

If you want to seek and destroy dead code in your python program, give it a try!


John O'DuinnRelEngCon 2015: Call for papers

Preparations for RelEngConf 2015 are officially in full swing. This means two things:

1) RelEngCon 2015 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines, and submit your proposal before the deadline of 23-jan-2015.

2) Both RelEngCon 2014 and RelEngCon 2013 were great. The mixture of attendees and speakers, from academia and battle-hardened industry, made for some riveting topics and side discussions. It’s too early to tell who exactly will be speaking in 2015, but it’s not too early to start planning your travel to Florence, Italy!! Also of note: RelEngCon 2015 will be just a few weeks after the publication of the IEEE 1st Special Issue on Release Engineering. Looks like RelEngConf 2015 is going to be special also.

For further details about the conference, or submitting proposals, see http://releng.polymtl.ca/RELENG2015/html/index.html. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off May 19th, 2015 on your calendar and book now. It will be well worth your time.

See you there!
John.

John O'Duinn“The Race for Space” by Public Service Broadcasting

I was happily surprised by this as a gift recently.

For me, the intermixing of old original broadcasts with original composition music worked well as an idea. Choosing which broadcasts to include was just as important as composing the right music.

I liked how the composers framed the album around 9 pivotal events from 1957 (launch of Sputnik) to 1972 (Apollo 17, the last Apollo departing the moon). Obviously, there were a lot of broadcasts to choose from, and I liked their choices – some of which I’d never heard before. Kennedy’s “We choose to go to the moon” speech, a homage to Valentina Tereshkova (the first woman in space), Apollo 8’s “see you on the flip side” (the earthrise photo taken by Apollo 8 is still one of my favourites), and the tense interactions of all the ground + flight teams in the final seconds of Apollo 11’s descent to landing (including handling the 1202, 1201 errors!).

All heady stuff and well worth a listen.

John O'DuinnA new focus

I’ve just added two new categories (“Release Engineering” and “Startup”) to my blog. This reflects the new reality of my life.

Obviously, many of my existing posts are already about Release Engineering, an area I care deeply about, yet somehow I just never flagged them correctly – I’ll fix that. The bigger news is about “Startup”. A few months ago, I decided to take the plunge and actually do what I’ve been talking about for years – start my own startup.

Since then, every day has been really busy, exciting, scary and fun – sometimes even all on the same day! Finding a bug in some AWS API documentation. Reading legal contracts with a highlighter and having to stop to Google some of the terms. Getting phone calls from a stranger that start with “you don’t know me, but I got your name from xxx and I hope you can help…”. Saying “what could possibly go wrong” multiple times a day. Joking about “pay no attention to the man behind the green curtain” while preparing a demo. Politely declining a job offer from a cold call recruiter, hanging up, taking a deep breath, calmly reminding myself that 9 out of 10 startups fail, and then jumping back into the fray. Oddly enough, I find I’m sleeping more, and feeling less stressed!? So far. Oh, and I’m drinking even more coffee than usual (yes, that is possible!).

Things are still under wraps, but as soon as there’s something worthwhile to show or talk about, I’ll post here on my blog.

In addition to the PRODUCT of the startup, I’ll also be blogging about the PROCESS of creating the startup. Technical, business, human aspects… warts and all. I’ve found it really helpful, and encouraging, to read posts from other founders and investors who’ve gone before me, on what they learned while building a startup – not just airbrushed niceties but also the genuine good/bad/funny/horror/irreverent/snafu stories that people have posted about life while building a startup. Some I’ve nodded along with, say “that was obvious”. Some I’ve re-read multiple times carefully and made mental notes. All are honest and helpful – to me and I’m sure many many others also. In that “pay it forward” spirit, I’ll make time to blog about this, and hopefully others starting their own entrepreneurial path will find these posts helpful – in a “oh, that is clever, I should make sure to do that” way… or “oh boy, I need to make sure to *never* do that” way… or somewhere in between!

I have to say I feel incredibly lucky with the support and encouragement of friends, family, former-co-workers and others I’ve bumped into over the years. Not just generic “don’t worry – it will be fine” support. Even with best of intentions, telling people what you think they want to hear, even when you think it may not be a good idea, is not good – it can set someone up to fail. Instead, I’ve been getting really helpful, informed, constructive support and advice like “maybe if you change it to…” or “have you asked xxx, she might have an insight…” or “that was good, don’t change that” or “… ok, that didn’t go well, so how will you do better next time?” Sometimes hard to hear, but always true from the heart and totally honest. This support is priceless, and means a great deal to me, so I find myself listening very carefully and humbly thanking people a LOT.

John O'Duinn“RelEng as a Force Multiplier” at RelEng Conf 2015

Last week, I was honored to give the closing talk at RelEng Conf 2015, here in Florence, Italy.

I’ve used this same title in previous presentations; the mindset it portrays still feels important to me. Every time I give this presentation, I am invigorated by the enthusiastic response, and work to improve further, so I re-write it again. This most recent presentation at RelEngConf2015 was almost a complete re-write; only a couple of slides remain from the original. Click on the thumbnail to get the slides.

The main focus of this talk was:
1) Release Engineers build pipelines, while developers build products. When done correctly, this pipeline makes the entire company more effective. By contrast, done incorrectly, broken pipelines will hamper a company – sometimes fatally. This different perspective and career focus is important to keep in mind when hiring to solve company infrastructure problems.

2) Release Engineers routinely talk and listen with developers and testers, typically about the current project-in-progress. That’s good – we obviously need to keep doing that. However, I believe that Release Engineers also need to spend time talking to and listening with people who have a very different perspective – people who care about the fate of the *company*, as opposed to a specific project, and have a very different longer-term perspective. Typically, these people have titles like Founder/CxO/VP but every company has different cultural leaders and uses slightly different titles, so some detective work is in order. The important point here is to talk with people who care about the fate of the company, as opposed to the fate of a specific project – and keep that perspective in mind when building a pipeline that helps *all* your customers.

3) To illustrate these points, I then went into detail on some technical and culture-change projects to highlight their strategic importance.

As usual, it was a lively presentation with lots of active Q+A during the talk, as well as the following break-out session. Afterwards, 25 of us managed to find a great dinner (without a reservation!) in a nearby restaurant where the geek talk continued at full force for several more hours.

All in all, a wonderful day.

It was also great to meet up with catlee in person again. We both had lots to catch up on, in work and in life.

Bram Adams, Christian, Foutse, Kim and Stephany Bellomo all deserve ongoing credit for continuing to make this unusual and very educational conference come to life, as well as for curating the openness that is the hallmark of this event. As usual, I love the mix of academic researchers and industry practitioners, with talks alternating between industry and academic speakers all day long. The different perspectives, and the eagerness of everyone to have fully honest “what worked, what didn’t work” conversations with others from very different backgrounds is truly refreshing… and was very informative for everyone. I’m already looking forward to the next RelEngConf!!

[updated to fix some typos. John 28may2015]

John O'DuinnInterviews about “Work from home” policies at Facebook, Virgin and yes, Yahoo!

When I talk about “remoties”, I frequently get asked my thoughts on Yahoo’s now (in)famous “no more work-from-home” policy.

Richard Branson’s comments (Virgin, link to first video) and the separate comments from Jackie Reses (Yahoo, 2:27 into the second video) confirm what I’d heard from multiple unofficial mutterings – that Yahoo’s now (in)famous “no more work from home” decree was actually intended as a way to jolt the company culture into action.

I also liked Sheryl Sandberg’s (Facebook) comments about how a successful remote workplace depends on having clear measures of successful results. Rather than valuing someone by how many hours they are seen working in the office, it is better to have a company culture where you measure people by results. This echoes comments I’ve seen from Jason Fried in his “Remote” book, comments I’ve made in my “we are all remoties” presentations, and things I’ve heard again and again from various long-term remote workers.

These two interviews discuss these points really well. The entire article is well worth a read, and both videos are only a few minutes long, so worth the quick watch.

Encouraging!

Stuart ColvilleTips for building a dev-env with docker

The docker logo

Using docker and docker-compose to run a development environment can be a good way to have all your services connected and running together.

I'm going to assume you've got a basic understanding of docker and how it works; if not, there's a good overview of what docker is here.

We're currently using docker-compose alongside docker to tie together several services for the Mozilla payments development environment.

Having used docker-compose (née fig) on the marketplace development environment, we learnt a lot about what did and didn't work.

Two of the biggest issues we hit were:

  • Dependencies and dep updates
  • Front-end tools

Handling deps offline

Note: I've written about this problem before in detail: Docker and dependencies

If you're running a development environment, it's likely you'll be running pip or npm in the Dockerfile. The problem here is that installing all those deps happens in a single cached layer. As soon as you update the requirements.txt or package.json, that step needs to run again - if you're running the build yourself, that can be painful.

I think there's still room for a better solution, but the best way to avoid deps becoming a problem is to use the hub to build your docker images from a branch in github.

This way you can just pull new images to update your entire environment, and they will have the latest deps. If you need a new dep, you can manually install it on the running container (see docker exec), and once you commit the deps file update, the newly built image will have your new dep in it.

Front end toolchain

The next problem, which was a big one for us, was the performance of front-end tooling in a docker container. If you're running anything other than Linux, you'll likely be running boot2docker in VirtualBox. The problem with VirtualBox is that you're probably sharing the code into the VM with vboxfs. Unfortunately this doesn't provide low-level file-change notifications from the host into the VM, so the tools we typically use on the front end to watch for file changes tend to work very slowly and cause VirtualBox to eat CPU. In my experience NFS is better from a perf point of view, but still not great.

The trick to solving this one is again to leverage building images on the docker hub. Most of your front-end code is going to just be creating static files. So from your dev-env's perspective you just need to provide the files and serve them from your favorite web server e.g. nginx. The other neat thing is, if you do this right, you're not going to need to remember to commit built files to your tree. \o/

The way we do this is with a data volume container (or data-only container) that just contains the files. For developing locally we do a switcheroo and point that volume to our local source code in the docker-compose.yml.

All the file-watching tools (grunt/gulp and friends) and npm deps are installed and run on the host not in a container.

To have the docker env still work out of the box, we set up hooks on travis to publish the built files (when the tests have passed) to a different branch, and the docker hub builds the image from that.

In summary

When things don't work you have to go back to the drawing board. Sometimes if you feel like you're fighting a tool then you're probably doing it wrong ™.

In our case leveraging the docker hub has made a real difference. Thanks to @AndyMckay for pushing us in that direction.

All in all, I think we're starting to think more like we would if we were using docker in production, which is not a bad thing; it means that going from a working development environment to something that could run in prod would be less of a leap.

I've glossed over a lot of details - so if you'd like to know more take a look at the payments repos or let me know in the comments.

Andreas GalNew Adventure

It is with mixed emotions that I end my almost 7 year long journey with Mozilla next week. Working with this team has been one of the peak experiences of my professional life.

I am also extremely excited about the next chapter in that life. I am departing Mozilla to create a new venture in the Internet of Things space, an open field that presents many of the types of challenges and opportunities that drive our passion for the Web.

I feel deeply humbled and honored that I had the chance to be part of such an amazing and passionate group of people for the last many years, building together the Web we want. I leave with fond memories and great respect for this organization and the people who build it each day. It has been a great honor to be your colleague and friend.


Filed under: Mozilla

Steve FinkFirefox directions

Some time back, I started thinking about what Firefox could do for me and its other users. Here are my thoughts, unashamedly biased and uninformed. If you don’t think something here is an awful idea, either I’ve failed or you aren’t reading closely enough.

Mozilla-specific advantages

Mozilla provides Firefox with a unique situation. We are not directly compelled to monetize Firefox or any other product. We do not need to capture users or wall them in. That doesn’t mean we *don’t* want to make money or gain market share — eventually, we need both, or Mozilla’s influence dies — but we have a degree of freedom that no other big player possesses.

Privacy and User Sovereignty

We can afford to give our users as much privacy as they might want. If you ask the vast majority of users about this, I suspect they’ll think it sounds like a theoretically good thing, but they don’t know what it really means, they don’t know what Firefox can do for them that other browsers don’t, and they don’t have any strong reason to care. All three of those are true of me, too.

Let’s tell them. Make an about:privacy that shows a chart of privacy features and behaviors of Firefox, Mozilla and its servers, and our major competitors. Give specific, realistic examples of information that is transmitted, what it might be used for (both probable and far-fetched). Include examples from jealous boyfriends and restrictive regimes. Expose our own limitations and dirty laundry: if this server gets hacked or someone you don’t like has privileged access, they will see that you crashed on youporn.com. It won’t all fit on a page, but hopefully some impactful higher-order bits can be presented immediately, with links to go deeper. Imagine a friendly journalist who wants to write an article claiming that Firefox is the clear choice for people who care about controlling their personal data and experiences on the web, and provide all the ammunition they’d need to write a convincing article. Don’t bias or sugarcoat it — using Firefox instead of Chrome is going to protect very little from identity theft, and Google has a lot more resources dedicated to securing their servers than we do.

If possible, include the “why”. We won’t track this data because it isn’t useful to us and we err on the side of the user. Chrome will because it’s part of their business model. Mention the positive value of lock-in to a corporation, and point out just how many sources of information Google can tap.

Point to Lightbeam. Make cookies visible — have them appear temporarily as floating icons when they are sent, and show them in View Source. Notify the user when a password or credit card number is sent unencrypted. Allow the user to delete and modify cookies. Or save them to external files and load them back in. Under page info, enumerate as much identity information as we can (show what the server can figure out, from cookies to OS to GL capabilities.)

Gaming

I don’t know if it’s just because nobody else needs to care yet, but it seems like we have a lead on gaming in the browser. It’s an area where players would be willing to switch browsers, even if only temporarily, to get a 10% higher frame rate. Until rich web gaming starts capturing a substantial number of user hours, it doesn’t seem like the other browser manufacturers have a reason to care. But if we can pull people out of the extremely proprietary and walled-off systems that are currently used for gaming, and get them onto the open web, then not only do we get a market share boost but we also expand the range of things that people commonly do on our open platform. It’ll encourage remixing and collaboration and pushing out the envelope, counteracting the dull descent into stupefied consumption. The human race will develop more interconnections, and gain a richer culture for the Borg to destroy when they finally find us.

Er, sorry. Got a little carried away there. Let me just say that gaming represents more than pathetic self-indulgence. It is communication and education and culture development and improved government. You’ll never truly understand how the US won its war for independence until you’ve lived it — or at least, simulated it. (Hint: it’s not because our fighters had more hit points.)

Addons

Addons are a major differentiator for Firefox. And most of them suck. Even ignoring the obvious stuff (malware, adware, etc.), which it seems there are plans in motion to combat, it still seems like addons aren’t providing the value they could be. People have great ideas, but sadly Chrome seems to be the main beneficiary these days. Some of that is simply due to audience size, but I don’t think that’s all of it.

I know little about addons, but I have worked on a few. At least for what I was doing, they’re a pain to write. Perhaps I always just happen to end up wanting to work on the trickiest pieces to expose nicely, but my experience hasn’t been pleasant. How do you make a simple change and try it out? Please don’t make me start up (or restart) the browser between every change. What’s the data model of tabs and windows and things? What’s the security model? I periodically try to work on a tab management extension, but everything I try to do seems to end up getting silently ignored, probably because it doesn’t have the right privileges. I asked questions at the last Summit, but the answers were *complicated*, and incomprehensible to someone like me who is unfamiliar with how the whole frontend is put together. And why isn’t there straightforward code that I can read and adapt? It seems like the real code that drives the browser looks rather different from what I’d need for my own addon. Why didn’t it work to take an existing addon, unpack it, modify it, and try it out? Sure, I probably did something stupid and broke something, but the browser wasn’t very good at telling me what.

That’s for complicated addons. Something else I feel is missing is superlightweight addons. Maybe Greasemonkey gives you this; I’ve barely used it. But say I’m on a page, or better yet a one-page app. I want something a little different. Maybe I want to remove a useless sidebar, or add a button that autofills in some form fields, or prevent something from getting grayed out and disabled, or iterate through an external spreadsheet to automatically fill out a form and submit it, or autologin as a particular user, or maybe just highlight every instance of a certain word. And I want this to happen whenever I visit the page. Wouldn’t it be great if I could just right-click and get an “automatic page action” menu or something? Sure, I’d have to tell it how to recognize the page, and it might or might not require writing JavaScript to actually perform the action. But if the overhead of making a simple addon could be ridiculously low, and it gave me a way of packaging it up to share with other people (or other computers of mine), it could possibly make addons a lot more approachable and frequently used. It would also be an absolute disaster, in that everyone and his dog would start writing tiny addons to do things that really shouldn’t be done with addons, but so be it. Think of something easy enough to be suggested in a 1-page support document as a workaround for some site functionality gap. Even better, I’d like the browser (or, more likely, this addon-generating addon) to automatically do version control (maybe auto-uploading to github or another repo), and make it easy to write self-tests and checks for whether the required page and platform functionality is still present.
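
To give a sense of the scale I have in mind, here is a rough sketch of the “highlight every instance of a certain word” case as a few lines of page JavaScript (the sort of snippet Greasemonkey or a devtools console can already run, and exactly what a super-lightweight addon could package up). The word and the markup are arbitrary examples, and it makes no attempt to handle edge cases:

  // Rough sketch: wrap every occurrence of a word in <mark> so it stands out.
  // "Firefox" and the <mark> styling are arbitrary choices for this example.
  function highlightWord(word) {
    var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT, null);
    var textNodes = [];
    while (walker.nextNode()) {
      if (walker.currentNode.nodeValue.indexOf(word) !== -1) {
        textNodes.push(walker.currentNode);
      }
    }
    textNodes.forEach(function(textNode) {
      var span = document.createElement('span');
      span.innerHTML = textNode.nodeValue.split(word).join('<mark>' + word + '</mark>');
      textNode.parentNode.replaceChild(span, textNode);
    });
  }

  highlightWord('Firefox');

Packaging that up, remembering which pages to run it on, and sharing it with other people is the part that ought to be one click.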

Addons also don’t feel that discoverable. I can search by name, but there’s still an element of guessing as to how serious (stable, maintained, high quality) an addon is. It turns my stomach to say this, but I kind of want a more social way of browsing and maintaining lists of addons. “People who are messed up in ways similar to you have left these addons enabled for long periods of time without disabling them or removing them in a fit of anger: …” Yes, this would require a degree of opt-in tracking evil, but how else can I and my true brethren avoid the polluted mindset of the godless vi-using heathens?

Hey, remember when we pissed off our addon authors by publicly shaming them with performance measurements? Could we do something similar, but only expose the real dirt after you’ve installed the addon?

Which brings me to addon blaming. It’s too hard to blame a misbehaving addon, which makes me too conservative about trying out addons. If I had a “Why Does My Firefox Suck Right Now?” button that popped up “because addon DrawMoustachesOnCatPictures is serializing all of your image loads”, I’d be more willing to experiment. Ok, that’s probably too hard — how about just “addon X is eating your CPU”?

Why *Does* My Firefox Suck Right Now?

On a related note, I think a big problem is that Firefox sometimes behaves very badly and the user doesn’t know why. We really need to get better at helping people help themselves in diagnosing these problems. It feels like a shame to me when somebody loves Firefox, but they start running into some misbehavior that they can’t figure out. If we’re really lucky, they’ll try the support forums. If that doesn’t work, or they couldn’t be bothered in the first place, they come to somebody knowledgeable and ask their help. The user is willing to try all kinds of things, install diagnostic tools, email around attachments of log files or whatever — but as far as I can tell these things are rarely useful. We’re not very good at gathering enough data to track the problem down. A few things serve as illustrative counterexamples: restarting in safe mode is enormously helpful, and about:memory can pinpoint problems on its own. Theoretically, the profiler ought to be good for diagnosing slowdowns and hangs, but I haven’t gotten much out of it in practice. (Admittedly, my own machine is Linux, and the stackwalking has never worked well enough here. But it’s been far from a golden bullet for my friends’ Windows machines either.)

These are the sort of situations where we are at high risk of losing users. If a PC runs Sunspider 5% faster but opening a new tab mysteriously takes 5 seconds, somebody’s going to switch browsers. Making the good case better is far less impactful than eliminating major suckage. If somebody comes to us with a problem, we should have a very well-worked out path to narrow it down to an addon or tab or swapping or networking or virus scanning or holy mother of bagels, you have *how* many tabs open?! Especially if IE and Chrome do just fine on the same computer (empty profiles or not.)

Browsing the F*ing Web

That’s what Firefox is for, right? So I have some problems there too. What’s the deal with tabs? I like opening tabs. It means I want to look at something. I’m not fond of closing tabs. I mean, it’s fine if I’m done with whatever I was looking at. But that’s only one tab. Closing any other tab means I have to stop what I’m doing to think about whether I need the tab. It’s like picking up trash. I’m willing to accept the necessity in real life, but in a computer-controlled virtual space, do I really have to?

Sadly, that means a tab explosion. Firefox is good about allowing it to happen, but pretty crappy at dealing with the inevitable results. I know lots of people have thought hard on how to improve things here, but I haven’t seen any solutions suggested that felt exactly right. And I don’t have one either, but I’ll propose things anyway.

Tabs vs bookmarks vs history is artificial. They’re all stuff you wanted at some point, some of which you want now, and some of which you’ll want again in the future. I want to open tabs forever without ever worrying about closing any, and I want the interface to only display the tabs I might conceivably be interested in right now. Bookmarks are just tabs that I claim I might want again in the future, but I don’t want to clutter up my tab bar with right now. History additionally has all the tabs that I claim I don’t need to see again, except maybe sometime when I remember that I’ve seen something before and need it again.

Yes, I am misusing “tabs” to mean “web pages”. Sue me.

So. Let me have active tabs, collected in some number of windows, preferably viewable on the left hand side in a hierarchical organization a la Tree Style Tabs. Give me buttons on the tabs to quickly say “I don’t care at all about this anymore”, “categorize for when I want to browse by topic and find it again”, “queue this up for when I am avoiding work” (perhaps in a named queue), and “I will only want this cluttering my screen as long as these other tabs are still visible”. (Those correspond to “close tab”, “bookmark tab”, “enqueue tab”, and “reparent tab”.) Allow me to find similar tabs and inform the browser about those, too. Right-clicking on a bugzilla tab ought to give me a way to find all bugzilla tabs and close them en masse, or reparent them into a separate group that I can scan through, enqueue some, and then close the rest as a group. I should be able to sort all the tabs by the last time I looked at them, so I can kill off the ancient ones.

Some context: I have a lot of tabs open. Many more than fit on one screen (even using Tree Style Tabs.) Cleaning them up is a pain because the ones early in the list are things that I’ve had around for a long time and resisted closing because they’re useful or I really really want to get around to reading them. The ones late in the list are recently open and likely to be current and relevant. The stuff in the middle is mostly crap, and I could probably close a hundred in a minute or two, except the tab list keeps jumping around and when I click in the middle I keep having to wait for the unloaded pages to load just so I can kill them.

I want to throw those ancient tabs in a “to read” queue. I want to find all of my pastebin tabs and kill them off, or maybe list them out in age order so I can just kill the oldest (probably autodeleted) ones. I don’t want the “to read” queue in my active tab list most of the time, but I want to move it in (probably via drag & drop) when I’m in the mood. I want to temporarily group my tabs by favicon and skim through them, deleting large swathes. I want to put the knot-tying tab and origami instruction tab into a separate “to play with” folder or queue. I want to collect my set of wikipedia pages for Jim Blandy’s anime recommendations into a group and move them to a bookmark bar, which I may want to either move or copy back to the active tab list when I’m ready to look at them again. I want to kill off all the bugzilla pages except the ones where I’ve entered something into a form field. I want to skim through my active tab list with j/k keys and set the action for each one, to be performed when I hit the key to commit to the actions. I want undo. I want one of those actions, a single keystroke, to set which window the tab shows up in, in the active tabs list. I want to sort the tabs by memory usage or CPU time. I want to unload individual tabs until I select them again.

I want a lot of stuff, don’t I?

Here is the place I originally intended to start talking about loaded and unloaded tabs, the perils and advantages of auto-unloading, and all of that, but:

This Post

I just checked the timestamp on this post. I wrote it on August 14, 2014, and have sat on it ever since. It’s been waiting for the day that I’ll finally get around to finishing it up, perhaps splitting it into several blog posts, etc. Thanks to Yoric’s Firefox reimaginings I came back to look, and realized that what’s going to happen is that this will get old and obsolete enough to just die. I’d be better off posting it, rough and incomplete as it is. (And I *still* haven’t watched those anime. Where has time gone?!)

Whoa. But I just looked at the preview, and this post is *long*. Sorry about that. If I were a decent human being, I would have chopped it up into digestible pieces. I guess I just don’t like you that much.

Mozilla Reps CommunityReps Weekly Call – June 4th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • FoxYeah campaign.
  • Youth Mozilla mentorship program.
  • New Mozilla Indonesia Community Website
  • Featured Events.
  • Help me with my project.

AirMozilla video

Detailed notes

Shoutouts to the Reps who helped at Festival TIK 2015 in Bandung, Indonesia: @fauzanalfi, Shinta, @rara, Fadhil, Rizky Ariestiyansyah, Bimo, @tampubolonbudiman and Eriska.

FoxYeah campaign

@Chelsea joined the call to talk about this new campaign, which is currently US-based but will expand to Europe and Asia soon. Even if you are not in the US, you can participate by sharing the “keep it personal” message and checking out the YouTube videos.

You can create #foxyeah content in your own language asking people to download Firefox; just make sure you share it with the #foxyeah hashtag.

You can submit any question to @Chelsea.

FoxYeah presentation.

Youth Mozilla mentorship program

@Costenslayer introduced the Youth Mozilla program, which aims to bring young people to Mozilla for the first time and show them how to get involved. The idea is to connect them with functional areas and find them mentors.

You can get more information from this blog post.

New Mozilla Indonesia Community Website

@yofiesetiawan introduced the new website of Mozilla Indonesia, developed in Joomla! and now responsive!

The site is in English and Bahasa Indonesia, and they hope it will become the central place of information for every Mozilla activity in the country.

Featured events

These are some of the events that took place last week.

  • Bangladesh Community Meetup – Dhaka – 5th-6th
  • Community Pune Meetup – Pune, India – 6th
  • Vancouver Maker faire – Vancouver, Canada – 6th
  • Surveillance 101 – Skopje, Macedonia – 7th
  • Mozilla Day IFRS – Canoas, Brazil – 9th

Do you want to get more visibility for your events? Check how!

Help me with my project!

In this section the floor is yours: present, in one minute, a project you are working on and ask other Reps for help and support.

@franc is looking for help with the “External experts” project, which aims to identify local organizations that do a great job in participation and that Mozilla can learn from. Ping Francisco if you want to help.

Raw etherpad notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Botond BalloTrip Report: C++ Standards Meeting in Lenexa, May 2015

Summary / TL;DR

 

Project | What’s in it? | Status
C++14 | C++14 | Published!
C++17 | Various minor features. More substantial features under consideration include default comparisons and operator . | On track for 2017
Networking TS | Sockets library based on Boost.ASIO | In late stages of design review
Filesystems TS | Standard filesystem interface | Published!
Library Fundamentals TS I | optional, any, string_view, and more | Voted out for publication!
Library Fundamentals TS II | source code information capture, array_view, and more | Expected 2016 or 2017
Array Extensions TS | Old proposals (arrays of runtime bound (ARBs) and dynarray) abandoned. | New direction being explored: a magic type that acts like an ARB and can be used as a local variable only.
Parallelism TS | Parallel versions of STL algorithms | Voted out for publication!
Concurrency TS | improvements to future, latches and barriers, atomic smart pointers | Voted out for balloting by national standards bodies
Transactional Memory TS | transaction support | Voted out for publication!
Concepts (“Lite”) TS | constrained templates | In process of addressing comments from national standards bodies. Publication expected late 2015.
Reflection | Code introspection and (later) reification mechanisms | Still in the design stage, no ETA
Graphics | 2D drawing API | Standard wording expected to be ready for review at the next meeting.
Modules | A component system to supersede the textual header file inclusion model | Microsoft and Clang continuing to iterate on their implementations and converge on a design. Ship vehicle (C++17 or a TS) uncertain.

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee in Lenexa, Kansas. This was the first committee meeting in 2015; you can find my reports on 2014’s meetings here (February, Issaquah), here (June 2014, Rapperswil), and here (November, Urbana-Champaign). These reports, particularly the Urbana one, provide useful context for this post.

The focus of this meeting was iterating on the various ambitious proposals in progress, and beginning to form an idea of which of them will be ready for C++17. In addition, several of the Technical Specifications (TS) in flight have reached their final stages in the committee, while work continues on others.

C++14

C++14 was officially published as an International Standard by ISO in December 2014. Its timely publication is evidence that the committee’s plan for adopting a three-year publication cycle post-C++11 has, thus far, been successful.

C++17

What Will Make It In?

When the committee originally announced the schedule for the next two standard revisions, shortly after C++11 was published, C++14 was described as a “minor revision” and C++17 as a “major” one.

A few things have happened since then:

  • C++14 ended up being not so minor, with fairly substantial features like generic lambdas and variable templates making it in.
  • For features not going through the TS process, there’s no intrinsic reason the three years between C++14 and C++17 would be any more or less productive for the committee than the three years between C++11 and C++14.
  • For features going through the TS process, that process has in some cases been taking somewhat longer than initially envisioned, and thus it’s not clear whether they will be published in time – and particularly, if there will be enough time after their publication to get more implementation and use experience with them – to merge them into C++17. I would say the following TS’s stand a chance of being merged into C++17:
    • Filesystems, which has already been published
    • Library Fundamentals I, Parallelism, and Transactional Memory, which were voted for publication at this meeting
    • Concepts, which is expected to be voted for publication in a post-meeting telecon, or at the next meeting (in October in Kona, Hawaii) at the latest

    However, I don’t think any of these mergers is a given. For example, there is some opposition to merging the Filesystems TS into the Standard in its current form on the basis that its design isn’t a good fit for certain less common varieties of filesystems. Concepts also has some pending design issues that may be revisited prior to a merger. In all cases, feedback from implementors and users will be key and will likely decide the TS’s fate.

As a result, I think it’s likely that C++17 will end up being approximately as “major” or “minor” of a revision as C++14 was.

Modules making it into C++17 is less likely. Due to the significance of the change that they bring to the language, there have been calls – including by the authors of one of the two current modules implementations – to pursue the feature initially as a Technical Specification, giving the design the opportunity to be hardened via implementation and use experience. If this avenue is pursued, the timing effectively rules out such a TS being published and merged into the IS by C++17. That said, from a user’s perspective, I don’t think this is any reason to despair: the feature is still likely to become available to users in the 2017 timeframe, even if it’s in the form of a TS.

Reflection is still very much in the design stage, and SG 7 indicated a preference for a TS as the initial ship vehicle. C++17 is practically out of the question.

(I talk about modules and reflection in more detail below, for those interested.)

That said, there are features not going through the TS process which are expected to be in C++17.

What Has Made It In?

My Urbana report lists the language and library features that have made it into C++17 as of the last meeting.

No new language features were voted into C++17 this meeting, in the sense of standard wording for them being merged into the C++17 working draft (a small, language lawyer-ish change, making exception specifications part of the type system came close, but was deferred to Kona due to some concerns brought up during the closing plenary session). However, there are numerous language features in the design or wording review stages that are expected to be voted into C++17 in upcoming meetings; I talk about these in the “Evolution Working Group” section below.

There were some small library features voted into C++17 at this meeting:

Technical Specifications

Procedural changes

There has been a change to ISO’s procedure for publishing a Technical Specification since the last meeting. The process used to involve two rounds of balloting by national standards bodies, called PDTS (Preliminary Draft TS) and DTS (Draft TS). Recently, the DTS round has been dropped, leaving just the PDTS round, and making for a more agile TS publication process.

Crossed the finish line!

As a result of this procedural change, some TS’s which had successfully finished their PDTS ballots became eligible to be voted out for publication at this meeting, and after addressing PDTS ballot comments, vote them out we did! Library Fundamentals I, Transactional Memory, and Parallelism I have all been sent to ISO for official publication, which should happen within a few months.

A couple of other TS’s haven’t quite crossed the finish line yet, but are very close.

Concepts

The Concepts TS garnered a rich and opinionated collection of PDTS ballot comments. Among them were your usual editorial and wording-level technical comments, but also some design-level comments which were brought before the Evolution Working Group (EWG) for consideration.

It’s rare for the committee to make design changes to a TS or Standard at such a late stage in the publication cycle, and indeed most design-level comments were deferred (meaning, they will not be addressed in this version of the TS, but they could be revisited in a future version, or if the TS comes up for merging into the Standard). One comment, however, which was essentially a request for a (small) feature, was approved. The feature will allow using a concept name as type name in a variable declaration:

ConceptName var = expression;

The semantics is that the type of var is deduced from the type of expression (much like with auto), but the code is ill-formed if the deduced type does not satisfy the concept.
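
To make this concrete, here is a minimal sketch of the new usage. The Sizeable concept is just a toy example of mine, and compiling the snippet assumes an implementation of the Concepts TS that includes this new addition (such as Andrew Sutton’s GCC branch):

    #include <vector>

    // A toy concept written in Concepts TS syntax; the concept itself is my own
    // illustration, not something taken from the TS.
    template <typename T>
    concept bool Sizeable = requires(const T& t) { t.size(); };

    int main()
    {
        Sizeable v = std::vector<int>{1, 2, 3};  // OK: deduced type std::vector<int> satisfies Sizeable
        // Sizeable i = 42;                      // ill-formed: the deduced type int does not satisfy Sizeable
        return static_cast<int>(v.size()) - 3;   // returns 0
    }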

I was mildly surprised that EWG was willing to approve this addition at this late stage, but pleasantly so: I think this feature is very useful. To top off the good news, Andrew Sutton (the primary author of the Concepts TS), who couldn’t make it to the meeting itself, reported only two days later that he added support for this feature in his GCC-based Concepts TS implementation! (Implementation experience is immensely valuable for guiding the committee’s decisions, because issues with a proposed feature often come up during implementation.)

As a result of this new addition, and a multitude of wording-level comments, the Core Working Group (CWG) didn’t have time to prepare final wording for the Concepts TS by the end of the meeting, so it couldn’t be voted out for publication just yet. Rather, CWG plans to hold a post-meeting teleconference to (hopefully) complete the final wording, after which the options are to hold a committee-wide teleconference to vote it out for publication, or to wait until Kona to vote on it.

Either way, the Concepts TS is practically at the brink of completion! Very exciting.

There’s also good news on the implementation side: GCC front-end developer Jason Merrill says Andrew’s Concepts implementation is expected to merge into mainline GCC within a month or so. Meanwhile, IBM, who have switched to using clang as the front-end for their newer products, announced their intention to kick off a clang-based implementation.

Concurrency I

Good progress here, too: the first Concurrency TS was sent out for its PDTS ballot! Assuming a successful ballot, it should be ready to be voted for publication in Kona.

Evolution Working Group

As usual, I spent most of the meeting in the Evolution Working Group, which does design review for proposed language features. EWG once again had a full plate of proposals to look at.

Recapping from previous posts, the outcome of an EWG design review is one of the following:

  • Approved. The proposal is approved without design changes. They are sent on to the Core Working Group (CWG), which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Here’s how this meeting’s proposals fared:

Accepted:

  • A proposal to make emplace_back() and similar play nice with aggregate types. Note that this is a library proposal for which EWG input was solicited, so it was sent to the Library Working Group (LWG) rather than CWG.
  • Microsoft’s resumable functions (a.k.a. coroutines) proposal. More about this in the “Coroutines” section below.
  • A proposal to make exception specifications part of the type system. This resolves a long-standing issue where exception specifications sort-of contribute to a function’s type but not quite, and as such their handling in various contexts (passing template arguments, conversions between function pointer types, and others) requires special rules.
  • A minor change to the semantics of inheriting constructors, which makes the semantics more intuitive and consistent with inheritance of other members.
  • A proposal for inline variables, which are basically static storage duration variables (either at namespace scope, or static data members) whose definition can appear inline, and can be defined in a header. This can already be accomplished using a static variable defined locally in an inline function; this proposal just exposes that semantics under a more straightforward syntax (a short sketch appears after this list). The proposal had some minority opposition (on the basis that it reuses the keyword inline for a purpose somewhat unrelated to its existing use, and that it encourages the use of static storage duration variables to begin with), but otherwise had fairly strong support and was accepted.
  • A proposal to remove the keyword register, but reserve it for future use.
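
To illustrate the inline variables item above, here is a minimal sketch contrasting today’s workaround with the proposed syntax; the names are mine and the details are of course subject to the final wording:

    // widget_config.h -- a header that may be included in many translation units.

    // Today: a common workaround is a static local inside an inline function.
    inline int& use_count_workaround()
    {
        static int count = 0;
        return count;
    }

    // With the proposal: the same thing expressed directly as an inline variable.
    // Every translation unit that includes this header refers to one single object.
    inline int use_count = 0;

    struct Widget
    {
        // Also proposed: an inline static data member defined in-class, with no
        // separate out-of-line definition required.
        static inline int live_instances = 0;
    };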

Further work:

  • A tweak to the folding expressions feature added last meeting, which would restrict the set of operators for which a unary fold with an empty parameter pack is valid. Support for this hinges on defining an identity element for the affected operators, but it’s not clear that such a notion is sensible in the presence of operator overloading. For example, consider the following function:
        template <typename... Strings>
        auto concatenate(Strings... strings)
        {
          return (strings + ...);
        }
    

    With the current rules, when this function is called with 1 or more strings, it returns the concatenation of its arguments (because strings overload operator + to do concatenation), but when called with no arguments, it returns the integer 0, because that’s defined as the identity element for the addition operator.

    The proposal in question would make it ill-formed to call this function with no arguments; if the author wants that to work, their recourse is to change the unary fold to the binary fold strings + ... + "".

    There was consensus that addition, multiplication, and bitwise operators should be treated this way, but others were more contentious. For example, it was argued that for the logical operators && and ||, you shouldn’t be overloading them to return things that aren’t bool anyways, so the identities true and false remain appropriate.

    A particularly interesting case is the comma operator, for which the specified identity is void(). Comma folds are expected to be common as a technique to emulate statement folding, as in the following example:

        template <typename... Functions>
        auto call_all_functions(Functions... functions)
        {
          (functions() , ...);  // expands to function1() , function2() , ... , functionN();
        }
    

    On the one hand, it would be a shame to make people write functions , ... , void() every time they want to do this, and the comma operator generally shouldn’t be overloaded anyways, so keeping the void() identity should be reasonable. On the other hand, if people want to do statement folding, perhaps the language should allow them to do that directly, rather than relying on the comma operator to emulate it with expression folding.

    As there was no consensus on the precise course of action, the matter was referred for further work.

  • The latest modules proposal from the Microsoft folks. More about this in the “Modules” section below.
  • The alternative coroutines proposal that I presented (the author being absent). More about this in the “Coroutines” section below.
  • Contract programming. Two main proposals were discussed, with different flavours. One of them provided an assert-like facility, to be used inside function bodies, primarily for the purpose of runtime checking. The other (there was also a third very similar to it) proposed a syntax for declaring preconditions, postconditions, and invariants for a function in its interface (i.e. in its declaration), primarily for the purpose of static analysis and enabling compiler optimizations. There was consensus that both sets of goals, and both places for writing contracts (interfaces and implementations) are desirable, but there was some debate about whether the proposals should be considered as orthogonal and allowed to proceed independently, or whether the authors should collaborate and come up with a unified proposal that satisfies both use cases. In the end, the direction was to strive for a unified proposal.
  • Default comparisons. Bjarne presented the latest version of his proposal for automatically generating comparison operators for class types. The main features of the proposal are (1) that it’s opt-out, meaning you get the operators by default but you can declare them as = delete if you don’t want them; and (2) it’s carefully designed to avoid breaking existing code as follows: for any comparison operator call site, if name lookup under current rules finds a user-declared operator, it will continue finding that operator under the new rules, rather than using any auto-generated one. The proposal had strong consensus, which was a (welcome) surprise after the lack of consensus on earlier versions (and other proposals in the area) at the previous two meetings. It came close to being approved and sent to CWG, but some details of the semantics remained to be hashed out, so Bjarne was asked to return with an updated proposal in Kona.
    There was another proposal related to comparisons, which pointed out that we are currently waving our hands about matters such as different types of equality and different types of order (total, weak, partial, etc.). To rectify this, it proposed using named functions (e.g. one for each kind of order) instead of operators for comparisons. The feedback was that such machinery is useful to have, but we also want reasonable defaults which are spelt == and <, and as such, the proposed set of named functions can be developed independently of Bjarne’s proposal.
  • A proposal to extend aggregate initialization to be allowed for types with base classes in cases where the base classes are default-constructible. EWG’s feedback was to revise the proposal to also address the use case of providing values to initialize the base classes with in cases where they are not default-constructible.
  • Unified call syntax. This proposal, by Bjarne, seeks to unify the member (x.f(y)) and non-member (f(x, y)) call syntaxes by allowing functions of either kind to be invoked by syntax of either kind. The approach is to have the x.f(y) syntax look for member functions first, and fall back to looking for non-member functions only if the member lookup yields no results; conversely, f(x, y) would look for non-member functions first, and fall back to a member lookup. The resulting semantics are asymmetric (they don’t make x.f(y) and f(x, y) completely interchangeable), but fully backwards compatible. (This design was one of several alternatives Bjarne presented at the previous meeting, and it seemed to have the greatest chance for gaining consensus, primarily due to its backwards compatibility.) A short sketch of the proposed lookup behaviour appears after this list.

    Beyond aesthetics (“I prefer my function calls to look this way”) and tooling reasons (“member call syntax gives me IntelliSense”), the primary motivation for this feature is facilitating generic programming, which is expected to become more popular than ever with Concepts. When defining requirements on a template parameter type, either informally in today’s C++ or explicitly with Concepts, you currently have to choose whether the operations on the type are expressed as member or non-member functions. Either choice constrains users of your template: if you choose member functions, they can’t adapt third-party types that they can’t modify to model your concept; if you choose non-member functions, they will likely have to provide a lot of non-member adapters for types that would otherwise automatically model your concept. You could choose to allow either one (this is what C++11 does with the “range” concept used in the range-based for loop: the required operation of getting an iterator to the first element of the range can be spelt either begin(range) or range.begin()), but then your call sites become very unreadable because you need a lot of template/SFINAE magic to express “call X if it exists, otherwise call Y”. A unified call syntax would allow template implementers to use whichever call syntax they like, while users of the template can use either member functions or non-member functions to model the concepts, as they desire / are able to. (C++0x Concepts had a feature called “concept maps” which solved this problem by acting as a bridge between the operations in a concept definition (which is what generic code would use) and the actual operations on a type modelling the concept. However, concept maps were removed from later Concepts designs because they proved very tricky to specify and implement.)

    Unfortunately, this is a very risky change to make to the language. While the proposal itself doesn’t break any existing code, new code that takes advantage of the proposal (that is, code that invokes a non-member function via a member call syntax, or vice versa) is considerably more prone to breakage. For example, adding a new member function to a class can easily break user code which was calling a non-member function of that name via the member function syntax; this breakage can manifest as a compiler error, or as a silent behaviour change, depending on the situation.

    A lot of the finer points of the proposed semantics remain to be nailed down as well. How does the fallback mechanism work – is it activated only if the initial lookup doesn’t find any results, or also if it finds results but they’re all, say, SFINAE’d out? What is the interaction with two-phase name lookup? What happens when the member call syntax is used on an incomplete type?

    EWG was very divided on this proposal; consensus seemed fairly far off. Some people suggested changes to the proposal that would allay some of their concerns with it; one of them was to have classes opt-in to unified call syntax, another to restrict the set of non-member functions that can be found via a member call syntax to those found by ADL. Bjarne said that he intends to continue iterating on the idea.

  • A proposal for overloading operator dot. This would allow creating “smart references”, much as the ability to overload operator -> gives us smart pointers, as well as enable many other patterns that take advantage of interface composition. The proposal was generally very well-received; one feature that was somewhat controversial was the ability to declare multiple “overloads” of operator dot that return different types, with the effect of bringing the interfaces of both types into the scope of the declaring type (much as multiple inheritance from the two types would). The author (also Bjarne) was asked to come back with standard wording.
  • A proposal to allow template argument deduction for constructors. The idea here is to avoid having to define factory functions for templated types, such as make_pair(), for the sole purpose of not having to explicitly write out the template argument types in a constructor call of the form pair<T, U>(t, u); the proposal would allow simply pair(t, u). This proposal has been on the table for a while, but it’s been plagued by the problem that for a lot of classes, deduction based on existing constructors wouldn’t work. For example, if a class container<T> has a constructor that takes arguments of type container<T>::iterator, that type is a non-deduced context, so T could not be deduced from a constructor call of the form container(begin, end). The latest version addresses this by allowing class authors to optionally define “canonical factory functions” that define how the class’ template parameters are deduced from constructor arguments. Here’s what one might look like (the syntax is hypothetical):
    template <typename Iter>
    container(Iter a, Iter b) 
        -> container<typename iterator_traits<Iter>::value_type>;

    This basically says “if container is constructed from two iterators, the class’ template parameter is the value type of those iterators”. The question of where to place such a declaration came up; EWG favoured placing it at namespace scope, so as to allow third parties to provide them if desired.

    Another point that was brought up was that a constructor call of the form classname(arguments) where classname is a template class, already has a meaning inside the scope of classname: there, classname without any template arguments means “the current instantiation” (this is called injected class names in standard terminology). The proposal needs to specify whether such a constructor would change meaning (i.e. deduction would be performed instead) or not. The consensus was to try to perform deduction, and fall back to the current instantiation if that fails; this would technically be a breaking change, but the hope is that the scope of any breakage would be minor.

    Overall, the proposal had strong support and is expected to move forward.

  • A proposal to allow a template to have a non-type template parameter whose type is deduced. EWG expressed a preference for the syntax template <auto x> and encouraged the author to continue iterating on the idea.
  • A restricted form of static_if; the restrictions are that (1) it can only be used at local scope, (2) each branch of it forms a scope of its own, and (3) non-dependent constructs need to be well-formed in both branches. The proposal was well-received, and the author will continue working on it. It was noted that the Concepts TS doesn’t currently allow evaluating a concept outside a requires-expression, so something like static_if (ConceptName<T>) wouldn’t necessarily work, but hopefully that restriction will be lifted in the near future.
  • Extending static_assert to allow taking for the error message not just a string literal, but any constant expression that can be converted to a string literal. The idea is to allow performing compile-time string formatting to obtain the error message.
  • noexcept(auto), which basically means “deduce the noexcept-ness of this function from the noexcept-ness of the functions it calls”. Like return type deduction, this requires the body of the function to be available in each translation unit that uses the function. It was brought up that, together with the proposal for making exception specifications part of the type system, this would mean that modifying the function’s body could change the function’s type (again similarly to return type deduction), but people weren’t overly concerned about that. (A short sketch appears after this list.)
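
To illustrate the unified call syntax item above, here is a small sketch. Everything outside the comments is ordinary C++ that compiles today; the two calls that would rely on the proposal are left as comments, since no compiler implements them:

    #include <iostream>

    struct File {
        void close() { std::cout << "File::close()\n"; }               // member function only
    };

    struct Widget {};
    void render(const Widget&) { std::cout << "render(Widget)\n"; }    // non-member function only

    void demo(File f, Widget w)
    {
        // Valid today:
        f.close();        // member call syntax, finds File::close
        render(w);        // non-member call syntax, finds ::render

        // Under the proposal (not valid C++ today, hence left commented out):
        // close(f);      // no non-member close exists; lookup would fall back to
        //                // member lookup and call f.close()
        // w.render();    // Widget has no render member; lookup would fall back to
        //                // non-member lookup and call render(w)
    }

    int main() { demo(File{}, Widget{}); }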
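
And for the noexcept(auto) item, a minimal sketch of mine contrasting today’s hand-written exception specification with the proposed deduced form; the proposed version is commented out because it is not implemented anywhere yet:

    #include <utility>

    // Today: the condition has to be spelled out by hand in the noexcept specifier.
    template <typename T>
    void swap_values(T& a, T& b) noexcept(noexcept(std::swap(a, b)))
    {
        std::swap(a, b);
    }

    // With the proposal, the compiler would deduce the same thing from the body:
    //
    // template <typename T>
    // void swap_values(T& a, T& b) noexcept(auto)
    // {
    //     std::swap(a, b);
    // }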

Rejected:

  • A proposal for allowing return type deduction for explicitly-defaulted and -deleted special member functions. This was rejected because the group realized that it would introduce a footgun: a copy or move assignment operator with an auto return type would return by value!
  • No-op constructors and destructors, which are basically a language hack that would allow certain library optimizations; a previous attempt at enabling said optimizations, destructive move, was presented at the previous meeting. EWG’s feedback was much the same as last time: though it’s dressed differently, the proposal is still an attempt to mess with the language’s lifetime rules, which people are extremely wary of doing. The proposal as written will not move forward, but Chandler Carruth (Google’s lead clang developer) had some ideas about how to allow the desired optimizations by other means, and will discuss them with the author.
  • A proposal for relaxing the rules for forming an array type declarator to allow omitting a dimension anywhere; this would allow forming types such as int[][3][][7], though not instantiating them. The author was seeking to write a multi-dimensional array class where each dimension could be determined statically or dynamically, and use a type of this form as a template parameter and interpret it as a description for which dimensions were static. EWG didn’t find this motivation compelling (the information can be conveyed by other means, such as Array<int, Dimensions<0, 3, 0, 7>>) and was generally wary of adding to the set of types that can be formed but not instantiated (an existing example of such a type is a function type whose return type is a function type).
  • A proposal for generalized dynamic assumptions. EWG liked the use cases, but felt it would make more sense as part of a unified contracts proposal than a stand-alone feature, as contracts also need a syntax to express assumptions.
  • Allowing goto in constexpr functions. The intent here was to plug gratuitous holes between what can be done in a constant expression, and what can be done in regular code. EWG liked the motivation, but preferred to see it together with proposals that plug other holes, such as using lambdas and virtual functions in constant expressions. At least one of those (lambdas) is expected to be proposed in Kona. (Bjarne jokingly wondered whether some day people would be proposing launching threads in a constexpr function.)
  • Delayed evaluation parameters, which is a proposal for evaluating function arguments in a lazy rather than eager fashion (i.e. only evaluating them when their value is needed inside the function, not before calling the function). EWG was intrigued by the idea, but the proposal wasn’t nearly fleshed out enough to be considered as a concrete proposal. Interested people are encouraged to continue exploring the design space.
  • A revised proposal to allow arrays of runtime bound as data members wasn’t looked at specifically, but its presence on the agenda prompted a broader discussion about the Arrays TS, which I talk about in the “Arrays TS” section below.

Modules

There are two groups currently working on modules: Microsoft, in their compiler, and Google, in clang (extending Apple’s clang modules implementation for C and Objective-C, to C++). Microsoft has a draft proposal based on their design; Google hasn’t submitted a proposal based on their design yet.

The two designs differ slightly in philosophy. Microsoft’s design feels like what modules might have looked like if they were part of C++ from the beginning. It’s clean, and promises to be a good fit for new code written in a modular fashion. Google’s design, on the other hand, is geared towards making it possible to incrementally modularize large existing codebases without requiring a significant refactoring or other major changes (at least in the absence of egregiously non-modular design patterns). In other words, Microsoft’s design is more idealistic and pure, and Google’s is more practical.

Most notably, Microsoft’s design essentially requires modularizing a codebase from the bottom-up. For example, if a component of your program uses the C++ standard library, then modularizing that component requires first modularizing the C++ standard library; if the C++ standard library in turn uses the C standard library, then that too must be modularized (which is particularly unfortunate, for two reasons: (1) C standard library headers tend to be notoriously difficult to modularize due to their use of the preprocessor, and (2) they need to remain consumable by non-C++ code). Google’s design, on the other hand, specifically allows modular code to include non-modular code, so you could modularize your program component without having to modularize the C++ standard library.

To be sure, this feature of Google’s design introduces significant implementation complexity. (In my Rapperswil report, I reported that Google claimed their modules implementation was complete. I now understand what they meant was their implementation of a subset of the design that did not include this feature was complete.) I don’t have a deep understanding of the technical issues involved, but from what I’ve gathered, the difficulty is taking multiple copies of entities defined by the same non-modular code included in different modules and “merging” them to view them as a single entity.

There are other differences between the two designs, too. For example, Google’s allows exporting macros from a module, while Microsoft’s does not. Google’s design also supports cyclic dependencies between module interfaces, resolved by forward-declaring an entity from one module in the interface of the other; Microsoft’s proposal has no such support.

EWG spent half a day (and folks from the two groups additional time offline) discussing and trying to reconcile these design differences. The outcome was relatively hopeful about reaching convergence. The Microsoft folks conceded that some abilities, such as forward declaring entities from another module, are necessary. The Google folks conceded that some abilities geared towards making modules work with existing codebases, such as allowing the export of macros, don’t have to be supported directly by the language (they could be handled by compiler flags and such). The two groups agreed to produce a combined design paper for Kona.

In terms of a ship vehicle, the Google folks characterized modules as “the feature with the single greatest implementation impact so far”, and expressed a strong preference for going through a Technical Specification; this route would effectively rule out modules being in C++17, though as a TS the feature would still likely be available to users in the 2017 timeframe. That said, no official decision has been made yet regarding a ship vehicle.

Coroutines

You may recall from my Urbana report that the outcome of the coroutines discussion there was that two flavours of coroutines, stackful and stackless (see the report for an explanation of the distinction) were sufficiently different and both sufficiently motivated by use cases that they deserved to be developed as independent proposals, with a small minority favouring trying to unify them.

Since Urbana there has been progress in all of these directions, with four papers coming back for consideration at this meeting: an iteration on the stackless proposal, an iteration on the stackful proposal, and two different attempts to unify the two approaches. EWG looked at two of these.

The stackless proposal, called “resumable functions” and championed by Microsoft, is the most mature one. It has already gone through numerous rounds of review in SG 1 (the Concurrency Study Group), and is close to the stage where standard wording for it can be written. Its discussion in EWG mostly concerned details such as what to call the traits and functions involved in the proposal (there was no consensus to change from the current coroutine_ prefix), whether deducing that a function is resumable by the presence of await expressions in its body without annotating the declaration with a keyword like resumable is implementable (implementers agreed that it was as long as return statements in such a function were spelt differently), and whether yield is a reasonable keyword to standardize (consensus was that it was not, and so we’re going to get keywords prefixed with co- such as coyield and coreturn instead). Ultimately, the proposal author was given the go-ahead to write standard wording and go to CWG.

The other proposal EWG looked at was one of the attempts to unify stackful and stackless coroutines, called resumable expressions. I presented this paper because the author, Chris Kohlhoff, couldn’t make it to the meeting and I was somewhat familiar with the topic as a result of corresponding with him. Unlike resumable functions, this proposal was in the early design stage. The premise was that you could “have your cake and eat it too” by leveraging the compiler’s ability to analyze your code to avoid annotating calls to resumable functions at every level the way you have to do with await (the weakness of stackless coroutines compared to stackful), while still only requiring the allocation of as much memory as you need (the advantage of stackless over stackful). The problem was that the compiler analysis can’t see past translation unit boundaries, thus still requiring annotations there. There were also concerns about the performance of cross-translation-unit calls compared to resumable functions; Chris was convinced that it was no slower than resumable functions, but unfortunately I didn’t have a sufficiently good comparative understanding of the implementation models to successfully argue this point. The final opinion on the proposal was divided: some people saw imagination in it, and wanted to see it developed further; others didn’t appreciate the fact that a competing proposal to resumable functions was brought up at such a late stage, risking the standardization of the latter.

You might ask how it even makes sense for resumable functions to be sent to CWG without resumable expressions being categorically rejected. The answer is twofold: first, it’s looking like resumable functions will target a Technical Specification rather than C++17, which means there’s room for competing proposals to be developed in parallel. Second, even if it were targeting the standard, it’s conceivable that multiple kinds of coroutines can co-exist in the language (certainly in Urbana the consensus was that stackful and stackless coroutines should coexist). In any case, Chris plans to attend the Kona meeting and presumably present an updated version of the resumable expressions proposal.

The other two papers (the stackful one and a different unification attempt) were only looked at briefly by SG 1, as the author (same person for both) wasn’t present.

Arrays TS

The Arrays TS, which contains a language feature called “arrays of runtime bound” (ARBs) that’s essentially a toned-down version of C’s variable-length arrays (VLAs), and a library class dynarray for wrapping such a thing into a container interface, has been in limbo for the past year, as attempts to implement dynarray ran into difficulties, and proposals trying to replace it with something implementable got shot down one after the other.

At this meeting, EWG reviewed the status quo and addressed the question of what will happen to the Arrays TS going forward.

The status quo is this:

  • Many people want simple stack arrays. Emphasis on simple (no “making it a member of a class”) and stack (no “it might be on the stack or it might be on the heap” business.)
  • Some people want to be able to wrap such things into a class interface, so it knows its size, and doesn’t automatically decay to a pointer.
  • Some people additionally want to be able to copy this class and make it a member of other classes.
    • Implementation attempts have essentially demonstrated that this latter thing is impossible.

Given this state of affairs, the features currently in the Arrays TS are not going to be accepted in their current form; EWG recommended stripping the TS of its current contents, and waiting for a workable proposal to come along.

A promising direction for such a workable proposal is to have a “magic type” that acts like an ARB but knows its size and does not decay to a pointer (the implementable features that people wanted from a class wrapper). The type in question could only be used for a local variable, and the underlying ARB itself wouldn’t be exposed. Several people expressed an interest in collaborating on a proposal in this direction.

Library / Library Evolution Working Groups

With all the exciting action in EWG, I didn’t have much of a chance to follow progress on the library side in any detail, but here’s what I’ve gathered during the plenary sessions.

Note that I listed the library features accepted into C++17 at this meeting in the “C++17” section above.

The following proposals were accepted into the second Library Fundamentals TS:

The following proposals failed to gain consensus:

  • A proposal to include certain special math functions which have been standardized independently, into C++17. The primary objection was the cost to implementors for what was perceived by some as a relatively niche user base.
  • Multidimensional bounds, offset and array_view was proposed for acceptance into Library Fundamentals II, but was voted down over issues that still remain to be addressed.

The list of proposals still under review is very long, but here are some highlights:

  • Eric Niebler’s suggested design for customization points was reviewed favourably; Eric was encouraged to experiment more with the idea and come back.
  • A proposal for nothrow-swappable traits was reviewed favourably, and the author was given guidance to put forward all of the traits mentioned in the paper.
  • A lot of time was spent reviewing a proposal for a variant class; as one of the few “vocabulary types” still missing from the standard library, this is considered very important. A lot of the discussion centred around whether variant should have an empty state, and if not, how to deal with the scenario where during assignment, the copy constructor of the right-hand side object throws. boost::variant deals with this by incurring heap allocation, which is widely considered undesirable. I believe the prevailing consensus was to have an empty state, but only allow it to arise in this exceptional situation (pun intended), and make accessing the variant in its empty state (other than assigning a new value to it) undefined behaviour; this way, ordinary code isn’t burdened with having to worry about or check for the empty state.
  • LEWG is continuing to review the Networking TS based on Boost.ASIO.

Ranges

I split out ranges into its own section because I believe it deserves special mention.

As I described in my Urbana report, Eric Niebler came to that meeting with a detailed and well fleshed-out design for ranges in the standard library. It was reviewed very favourably, and Eric was asked to “conceptify” it – meaning express the concepts defined by the proposal using the features of the Concepts TS – and develop the result into a TS. This TS would form part of an “STLv2” refresh of the standard library which wouldn’t be subject to strict backwards-compatibility constraints with the current standard library.

Eric did not delay in doing so: he came back in Lenexa with a conceptified proposal written up as a draft TS. LEWG began a design review of this proposal, and made good progress on it; they hope to complete the review during a post-meeting teleconference and then forward the proposal to LWG.

Study Groups

Note that some Study Groups whose proposals have progressed to the main working groups and which don’t plan on looking at new proposals, are no longer considered active – for example, SG 2 (Modules) is in this category, as the modules proposal is now in front of EWG. I talk a bit about the ones still active below.

SG 1 (Concurrency)

With the Parallelism TS published and the Concurrency TS sent out for its PDTS ballot, SG 1 has been working on the second iterations of both TS’s.

Here are some of the proposals being considered for Parallelism II:

And for Concurrency II:

There are numerous other proposals in front of the SG as well which don’t have a target ship vehicle yet.

SG 5 (Transactional Memory)

SG 5 has accomplished commendable work by publishing the Transactional Memory TS, but they aren’t quite calling it quits! They plan to meet again in Kona to discuss proposals for possible future standardization in the area.

SG 6 (Numerics)

SG 6 met and looked at three specific proposals:

The first, as I described in the “Library” section above, was unsuccessfully proposed for acceptance into C++17.

The other two are being considered for a Numerics TS, along with a number of other proposals not specifically discussed at this meeting, such as unbounded-precision integer types, rational numbers, fixed-point arithmetic, decimal floating-point support, various utility functions (GCD/LCM, constexpr bitwise operations), random numbers (a couple of different proposals), and other topics for which there aren’t concrete proposals yet. (Obviously, not everything in this list will be in a single Numerics TS.)

Said Numerics TS is still in a relatively early stage; a working draft is not yet expected in Kona.

SG 7 (Reflection)

SG 7 had a busy evening session where they looked at a number of reflection-related proposals:

  • A language feature for argument stringization (misleadingly titled “parameter stringization”), attempting to obsolete one of the few remaining uses for the preprocessor. The proposal was well-received; of the various syntactic options presented, SG 7 preferred the approach of adding an annotation to a function declaration that makes the stringized arguments corresponding to all parameters available in the function body under a syntax such as function_name.param<N>()
  • Potential improvements to source-code information capture. Two specific improvements were discussed: the ability to query the offset relative to the start of the file (in addition to a line number + offset relative to the source of the line), which met with approval, and fine-grained intrinsics (e.g. querying the line number and the function name separately), for which SG 7 recommended waiting until the original proposal has implementation experience.
  • A proposal for language features to facilitate writing test code; consensus was that this topic is not sufficiently relevant to reflection, and should be pursued elsewhere (such as in EWG).
  • An updated version of a detailed, comprehensive proposal for static reflection (see also a slightly shorter paper outlining use cases). When the original version was presented in Urbana, it was given very short shrift, mostly because it had no presenter, and no one had had time to study it in any amount of detail. This time around, participants seemed to be more informed about it, and ended up being rather favourable to the overall approach. Most notably, the use of a “magic operator” (spelled mirrored(entity) in the proposal) rather than templates (such as reflect<entity> as the previously-discussed type property queries proposal did) opens the doors to reflecting more kinds of entities, such as typedefs (as distinct from their underlying types), uninstantiated templates, and namespaces, which SG 7 viewed as valuable. Interest in further development of the proposal was expressed.
  • Another reflection proposal, type reflection via variadic template expansion. Due to time constraints, this could only be presented very briefly. SG 7 expressed interest in a comparative analysis of the expressive power of this proposal compared to the “type property queries” and “static reflection” proposals.

Notably absent from the agenda was the latest version of the “type property queries” proposal, which had appeared to be the leading comprehensive reflection proposal in the past few meetings. The main reason it wasn’t presented was that the author couldn’t make it to the meeting, though one could also argue that SG 7 was already familiar with the overall design (the changes since the last version having been only minor), so time was better spent looking at proposals with alternative designs that still needed analysis.

SG 10 (Feature Test)

SG 10 is continuing to maintain the list of feature testing macros and keep it up to date with new things like C++17 features.

They also have a proposal for some new preprocessor features to aid feature detection: a __has_include() predicate for testing for the presence of an include, and a __has_cpp_attribute() predicate for testing for support for a particular attribute.
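
For illustration, here is roughly how these predicates would be used; __has_include already exists as an extension in clang, and the nested #ifdef guards keep the snippet harmless on compilers that lack the predicates:

    // Prefer <optional> if the implementation provides it, otherwise fall back
    // to the Library Fundamentals TS version.
    #ifdef __has_include
    #  if __has_include(<optional>)
    #    include <optional>
    #  elif __has_include(<experimental/optional>)
    #    include <experimental/optional>
    #  endif
    #endif

    // Expand to [[deprecated]] only if the compiler reports support for it.
    #ifdef __has_cpp_attribute
    #  if __has_cpp_attribute(deprecated)
    #    define DEPRECATED [[deprecated]]
    #  endif
    #endif
    #ifndef DEPRECATED
    #  define DEPRECATED
    #endif

    DEPRECATED int old_api();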

SG 12 (Undefined Behaviour)

SG 12 is still active, but did not meet this week as its members were busy advancing other proposals.

SG 13 (I/O)

SG 13 did not meet this week; the author of the 2D graphics proposal plans to complete standard wording for it for Kona.

*NEW* SG 14 (Game Development & Low-Latency Applications)

C++ aims to be a “zero-overhead abstraction” language, where you don’t pay for what you don’t use. It does a formidable job at this, but for some communities of C++ users, it could do an even better job.

The big sticking points are exceptions and RTTI, two language features for which you pay a bit even if you don’t use them. Projects concerned about this overhead commonly use compiler flags like -fno-exceptions and -fno-rtti, but the committee views these as nonconforming and doesn’t give them a lot of consideration. As a result, for example, a lot of standard library features require exceptions, and are therefore unusable in these projects.

There is also desire for more guarantees from the standard library, such as “empty containers never allocate” or “strings employ the small-string optimization”.

EWG looked at a wishlist containing the above and more, and recommended creating a new Study Group to explore the area further.

The new Study Group, SG 14, will tentatively be called “Game Development & Low-Latency Applications”, because these are the most impacted communities, and the communities from which the committee hopes to get significant input.

There is a tentative plan for SG 14 to meet independently of the committee at CppCon and GDC, the idea being that members of the game development community are more likely to be able to make it to those events than to committee meetings.

Next Meeting

The next meeting of the Committee will be in Kona, Hawaii, the week of October 19th, 2015.

Conclusion

On the whole, this was a very exciting meeting! My highlights:

  • The Concepts TS being very close to publication. Concepts has been the feature I’ve been waiting for the most; I think it will revolutionize generic programming and allow C++ users to unleash the power of templates in unprecedented ways.
  • C++17 starting to take shape. While it’s uncertain whether some larger pieces like modules and coroutines will make it in, I think it will still have a good complement of features. Among other things, the success of the most recent default comparisons proposal, after the lack of consensus that plagued the ones that came before, is very encouraging – a good success story for the standards process.
  • Modules being on track to be available in the 2017 timeframe, even if it’s uncertain whether it will be part of C++17 itself or in the form of a Technical Specification.

Things are only going to get more interesting as C++17 starts to take more concrete shape, and more Technical Specifications cross the finish line. Stay tuned!

Author’s Note (2015-06-10): In the original version of this post, I made a claim about Modules not making it into C++17 that sounded authoritative, when in fact it was largely speculation. I have edited the post to clarify that no official decision has yet been made about the ship vehicle for Modules. My apologies for this mischaracterization, and my thanks to Gabriel Dos Reis for pointing it out.


Daniel StenbergI lead the curl project and this is how it works

I did this 50 minute talk on May 21 2015 for a Swedish company, subtitled, tongue in cheek, “from hobby to world domination”. I think it turned out pretty decent and covers what the project is, how we work on it and what I do to make it run. Some of the questions are not easy to hear, but in general it works out fine. Enjoy!

David Rajchenbach TellerRe-dreaming Firefox (3): Identities

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Identities. Let me emphasise that the features described in this blog post do not exist.

Sacha has a Facebook account, plus two Gmail accounts and one Microsoft Live identity. Sacha is also present on Twitter, both with a personal account, and as the current owner of his company’s account. Sacha also has an account on his bank, another one on Paypal, and one on Amazon. With any browser other than Firefox, Sacha’s online life would be a bit complicated.

For one thing, Sacha is logged in to several of these accounts most of the time. Sacha has been told that this makes him easy to track, not just when he’s on Facebook, but also when he visits blogs, or news sites, or even shopping sites – but really, who has time to log off from every account? With any other browser, or with an older version of Firefox, Sacha would have no online privacy. Fortunately, Sacha is using Firefox, which has grown pretty good at handling identities.

Indeed, Firefox knows the difference between Facebook’s (and Google’s, etc.) main sites, for which Sacha may need to be logged in, and the tracking devices installed on other sites through ads, or through the Like button (and Google +1, etc.), which are pure nuisances. So, even when Sacha is logged in to Facebook, his identity remains hidden from the tracking devices. To put it differently, Sacha is logged in to Facebook only in Facebook tabs, and only while he’s using Facebook in those tabs. And since Sacha has two Gmail accounts, being logged in to one account doesn’t interact with the other. This feature is good not only for privacy, but also for security, as it considerably mitigates the danger of Cross-Site Scripting attacks. Conversely, if a third-party website uses Facebook as an identity provider, Firefox can detect this automatically and handle the log-in.

Privacy doesn’t stop there. Firefox has a database of Terms of Service for most websites. Whenever Firefox detects that Sacha is entering his e-mail address, or his phone number, or his physical address, Firefox can tell Sacha if he’s signing up for spam or telemarketing – and take measures to avoid it. If Sacha is signing up for spam, Firefox can automatically create an e-mail alias specific to this website, valid either for a few days, or forever. If Sacha has a provider of phone aliases, Firefox can similarly create a phone alias specific to the website, valid either for a few days, or forever. Similarly, if Sacha’s bank offers temporary credit card numbers, Firefox can automatically create a single-transaction credit card number.

Firefox offers an Identity Panel (if we release this feature, it will, of course, be called Persona) that lets Sacha find out exactly which site is linked to which identity, and grant or revoke authorizations to log in automatically when visiting such sites, as well as log in or out from a single place. In effect, this behaves as an Internet-wide Single Sign-On across identities. With a little help, Firefox can even be taught about lesser known identity providers, such as Sacha’s company’s Single Sign-On, and handle them from the same panel. That Identity Panel also keeps track of e-mail aliases, and can be used to revoke spam- and telemarketing-inducing aliases in just two clicks.

Also, security has improved a lot. Firefox can automatically generate strong passwords – it even has a database of sites which accept passphrases, or are restricted to 8 characters, etc. Firefox can also detect when Sacha uses the same password on two unrelated sites, and explain to him why this is a bad idea. Since Firefox can safely and securely share passwords with other devices and back them up into the cloud, or to encrypted QR Codes that Sacha can safely keep in his wallet, Sacha doesn’t even need to see passwords. And since Firefox handles the passwords, it can download, every day, a list of websites that are known to have been hacked, and use it to change passwords semi-automatically if necessary.

Security doesn’t stop there. The Identity Panel knows not only about passwords and identity providers, but also about the kind of information that Sacha has provided to each website. This includes Sacha’s e-mail address and physical address, Sacha’s phone number, and also Sacha’s credit card number. So when Firefox finds out that a website to which Sacha subscribes has been hacked, Sacha is informed immediately of the risks. This extends to less material information, such as Sacha’s personal blog of vacation pictures, which Sacha needs to check immediately to find out whether it has been defaced.

What now?

I would like to browse with this Firefox. Would you?


Hannah KaneMedium-term roadmap

Earlier this week, I wrote about the short-term roadmap for teach.mozilla.org. Now I’d like to share a few details about what we envision a little farther out (Q3 and into Q4).

Badges

The bulk of the work here will be improving the user experience for both badge applicants and badge reviewers. We’ll also be rolling out some new badges that are aligned with our programmatic plans, and will recognize the key volunteer roles we’ve identified (i.e. Regional Coordinators and Club Captains).

Directory

I’m really excited about this project because it will transform the site from simply being a place to find resources to a community, and because we’ll be able to offer more customized experiences for users once we know more about them. The Mozilla Learning Network Directory will include rich mentor profiles and group pages (where “groups” include Clubs, Hives, and organizations), as well as the ability to search and browse. The initial build will also include a full integration of Discourse. (We’re drawing heavily on the Hive Directory for inspiration.)

Curriculum functionality

It’s been a long time coming, but soon we’ll begin designing a more permanent solution for making our curriculum content dynamic. This will include adding basic user interactions (“Likes,” ratings, comments), as well as dynamically facilitating the creation and display of remixes and translations. We’ll likely also have a tool for users to create and share their own playlists, and to submit curriculum for consideration.

Ongoing iteration on the engagement flow

We’ll continue to learn what works in terms of connecting people quickly to what they need, and we’ll likely continue to make changes as a result of those learnings. Our engagement strategy will get some serious power behind it as we move forward with the email tooling project that’s happening in parallel.

Thimble!

Finally, our team at CDOT is actively working on making improvements to Thimble, our open source code editor for teachers and learners. We wrote about those improvements a few weeks ago.


Air MozillaGerman speaking community bi-weekly meeting

German speaking community bi-weekly meeting Bi-weekly meeting of the German-speaking community.

Ben HearsumBuildbot <-> Taskcluster Bridge Now in Production

A few weeks ago I gave a brief overview of the Buildbot <-> Taskcluster Bridge that we've been developing, and Selena provided some additional details about it yesterday. Today I'm happy to announce that it is ready to take on production work. As more and more jobs from our CI infrastructure move to Taskcluster, the Bridge will coordinate between them and jobs that must remain in Buildbot for the time being.

What's next?

The Bridge itself is feature complete until our requirements change (though there are a couple of minor bugs that would be nice to fix), but most of the Buildbot Schedulers still need to be replaced with Task Graphs. Some of this work will be done at the same time as porting specific build or test jobs to run natively in Taskcluster, but it doesn't have to be. I made a proof of concept of how to integrate selected Buildbot builds into the existing "taskcluster-graph" command and disable the Buildbot schedulers that it replaces. With a bit more work this could be extended to schedule all of the Buildbot builds for a branch, which would make porting specific jobs simpler. If you'd like to help out with this, let me know!

Air MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

Gregory SzorcChangeset Metadata on hg.mozilla.org

Just a few minutes ago, I deployed some updates to hg.mozilla.org to display more metadata on changeset pages. See 4b69a62d1905, dc4023d54436, and b617a57d6bf1 for examples of what's shown.

We currently display:

  • More detailed pushlog info. (Before you had to load another page to see things like the date of the push.)
  • The list of reviewers, each being a link that searches for other changesets they've reviewed.
  • A concise list of bugs referenced in the commit message.
  • Links to changesets that were backed out by this changeset.
  • On changesets that were backed out, we may also display a message that the changeset was backed out.
  • For Firefox repos, we also display the application milestone. This is the Gecko/app version recorded in the config/milestone.txt file in that changeset. The value can be used to quickly answer the question "What versions of Firefox have this changeset?" (see the sketch after this list).
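
A hedged sketch of how that milestone lookup could be reproduced outside the web UI, using a local clone and plain "hg cat". This is not part of hg.mozilla.org itself, and it assumes the conventional config/milestone.txt layout where the version is the last non-comment line:

    import subprocess

    def milestone_for_rev(repo_path, rev):
        """Return the Gecko/app version recorded at a given changeset."""
        out = subprocess.check_output(
            ["hg", "cat", "-r", rev, "config/milestone.txt"],
            cwd=repo_path, text=True,
        )
        # Skip comments and blank lines; the remaining line is the version.
        lines = [line.strip() for line in out.splitlines()
                 if line.strip() and not line.strip().startswith("#")]
        return lines[-1] if lines else None

    # e.g. milestone_for_rev("/path/to/local/clone", "tip")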

If you notice any issues or have requests for new features, please file a bug.

This work is built on top of a feature I added to Mercurial 3.4 to make it easier to inject extra data into Mercurial's web templates. We just deployed Mercurial 3.4.1 to hg.mozilla.org yesterday. It's worth noting that this deployment happened in the middle of the day with no user-perceived downtime. This is a far cry from where we were a year ago, when any server change required a maintenance window. We've invested a lot of work into a test suite for this service so we can continuously deploy without fear of breaking things. Moving fast feels so good.

David Rajchenbach TellerRe-dreaming Firefox (2): Beyond Bookmarks

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Beyond Bookmarks. Let me emphasize that the features described in this blog post do not exist.

« Look, here is an interesting website. I want to read that content (or watch that video, or play that game), just not immediately. » So, what am I going to do to remember that I wish to read it later:

  1. Bookmark it?
  2. Save it to disk?
  3. Pocket it?
  4. Remember that I saw it and find it in my history later?
  5. Remember that I saw it and find it in my Awesome Bar later?
  6. Hope that it shows up in the New Tab page?
  7. Open a tab?
  8. Install the Open Web App for that website?
  9. Open a tab and put that tab in a tab group?

Wow, that’s 9 ways of fulfilling the same task. Having so many ways of doing the same thing is not a very good sign, so let’s see if we can find a way to unify a few of these abstractions into something more generic and powerful.

Bookmarking is saving is reading later

What are the differences between Bookmarking and Saving?

  1. Bookmarking keeps a URL, while Saving keeps a snapshot.
  2. Bookmarks can be used only from within the browser, while Saved files can be used only from without.

Merging these two features is actually quite easy. Let’s introduce a new button, the Awesome Bookmarks, which will serve as a replacement for both the Bookmark button and Save As.

  • Clicking on the Awesome Bookmarks icon saves both the URL to the internal database and a snapshot to the Downloads directory (also accessible through the Downloads menu).
  • Opening an Awesome Bookmark, whether from the browser or from the OS, leads the user (by default) to the live version of the page, or (if the computer is not connected) to the snapshot.
  • Whenever visiting a page that has an Awesome Bookmark, the Awesome Bookmark icon changes color to offer the user the ability to switch between the live version or the snapshot.
  • The same page can be Awesome Bookmarked several times, offering the ability to switch between several snapshots.

By switching to Awesome Bookmarks, we have merged Saving, Bookmarking and the Read it Later list of Pocket. Actually, since Firefox already offers Sync and Social Sharing, we have just merged all the features of Pocket.

So we have collapsed three items from our list into one.

Bookmarks are history are tiles

What are the differences between Bookmarks and History?

  1. History is recorded automatically, while Bookmarks need to be recorded manually.
  2. History is eventually forgotten, while Bookmarks are not.
  3. Bookmarks can be put in folders, History cannot.

Let’s keep doing almost that, but without segregating the views. Let us introduce a new view, the Awesome Pages, which will serve as a replacement for both Bookmarks Menu and the History Menu.

This view shows a grid of thumbnails of visited pages, iOS/Android/Firefox OS style.

  • first the pages visited most often during the past few hours (with the option of scrolling for all the pages visited during the past few hours);
  • then the Awesome Bookmarks (because, after all, the user has decided to mark these pages)/Awesome Bookmarks folders (with the option of scrolling for more favourites);
  • then, if the user has opted in for suggestions, a set of Awesome Suggested Tiles (with the option of scrolling for more suggestions);
  • then the pages visited the most often today (with the option of scrolling for the other pages visited today);
  • then the pages visited most often this week (with the option of scrolling for the other pages visited this week);

By default, clicking on an Awesome Bookmark (or history entry, or suggested page, etc.) for a page that is already opened switches to that page. Non-bookmarked pages can be turned into Awesome Bookmarks trivially, by starring them or putting them into folders.

An Awesome Bar at the top of this Awesome Pages lets users quickly search for pages and folders. This is the same Awesome Bar that is already at the top of tabs in today’s Firefox, just with the full-screen Awesome Pages replacing the current drop-down menu.

Oh, and by the way, this Awesome Pages is actually our new New Tab page.

By switching to the Awesome Pages, we have merged:

  • the history menu;
  • the bookmarks menu;
  • the new tab page;
  • the awesome bar.

Bookmarks are tabs are apps

What are the differences between Bookmarks and Tabs?

  1. Clicking on a bookmark opens the page by loading it, while clicking on a tab opens the page by switching to it.

That’s not much of a difference, is it?

So let’s make a few more changes to our UX:

  • Awesome Bookmarks record the state of the page, in the style of Session Restore, so clicking on an Awesome Bookmark actually restores that page, whenever possible, instead of reloading it;
  • The ribbon on top of the browser, which traditionally contains tabs, is actually a simplified display of the Awesome Pages, which shows, by default, the pages most often visited during the past few hours;
  • Whether clicking on a ribbon item switches to a page or restores it is an implementation detail, which depends on whether the browser has decided that unloading a page was a good idea for memory/CPU/battery usage;
  • Replace Panorama with the Awesome Page, without further change.

So, with a little imagination (and, I’ll admit, a little hand-waving), we have merged tabs and bookmarks. Interestingly, we have done that by moving to an Apps-like model, in which whether an application is loaded or not is for the OS to decide, rather than the user.

By the way, what are the differences between Tabs and Open Web Apps?

  1. Apps can be killed by the OS, while Tabs cannot.
  2. Apps are visible to the OS, while Tabs appear in the browser only.

Well, if we decide that Apps are just Bookmarks, since Bookmarks have been made visible to the OS in section 1., and since Bookmarks have just been merged with Tabs which have just been made killable by the browser, we have our Apps model.

We have just removed three more items from our list.

What’s left?

We are down to one higher-level abstraction (the Awesome Bookmark) and one view of it (the Awesome Page). Of course, if this is eventually released, we are certainly going to call both Persona.

This new Firefox is quite different from today’s Firefox. Actually, it looks much more like Firefox OS, which may be a good thing. While I realize that many of the details are handwavy (e.g. how do you open the same page twice simultaneously?), I believe that someone smarter than me can do great things with this preliminary exploration.

I would like to try that Firefox. Would you?


Mark SurmanThe essence of web literacy

Read. Write. Participate. These words are at the heart of our emerging vision for Mozilla Learning (aka Academy). Whether you’re a first time smartphone user, a budding tech educator or an experienced programmer, the degree to which you can read, write and participate in the digital world shapes what you can imagine — and what you can do. These three capabilities are the essence of Mozilla’s definition of web literacy.

FullSizeRender

As we began thinking more about Mozilla Learning over the past month, we started to conclude that this read | write | participate combination should be the first principle behind our work. If a project doesn’t tie back to these capabilities, it should not be part of our plan. Or, put positively, everything we do should get people sharing know-how and building things on the web in a way that helps them hone their read | write | participate mojo.

Many existing Mozilla projects already fit these criteria. Our SmartOn series helps people get basic knowledge on topics like privacy. Mozilla Clubs brings together people who want to teach and learn core web literacy skills. And projects like OpenNews bring together developers who are honing their skills in open source and collaboration while building the next wave of news on the web. These projects may seem disparate at first, but they all help people learn, hone and wield the ability to read, write and participate on the web.

If we want to insert this minimalist version of web literacy into the heart of our work, we’ll need to define our terms and pressure test our thinking. My working definition of these concepts is:

  • Read: use and understand the web with a critical mind. Includes everything from knowing what a link is to bullshit detection.
  • Write: create content and express yourself on the web. Includes everything from posting to a blog to remixing found content to coding.
  • Participate: interact with others to make your own experience and the web richer. Includes everything from basic collaboration to working in the open.

On the idea of pressure testing our framework: the main question we’ve asked so far is ‘are these concepts helpful if we’re talking about people across a wide variety of skill levels?’ Does a first time smartphone user really need to know how to read, write and participate? Does a master coder still have skills to hone in these areas? And skills to share? Also, how does our existing basic web literacy grid hold up to these questions?

Laura de Reynal and I have been running different versions of this pressure test with people we work with over the last month or so. Laura has been talking to young people and first time smartphone users. I’ve been talking to people like Shuttleworth Fellows and participants at eLearning Africa who are emerging leaders in various flavours of ‘open + tech’. Roughly, we asked each of them to list a thing they know how to do or want to know how to do in each of the read | write | participate areas. In most cases, people understood our question with little explanation and got excited about what they knew and what they could learn. Many also expressed a pride and willingness to share what they know. By this quick and dirty measure, read | write | participate passed the test of being applicable to people with a wide variety of skills and sophistication.

One notable result from the groups I talked to: they all encouraged Mozilla to be incredibly opinionated about ‘what kind of reading, writing and participating’ matters most. In particular, a number of them stressed that we could do a lot of good in the world by helping people learn and hone the sort of ‘working in the open’ participation skills that we practice every day. Backing this up, evaluation research we’ve done recently shows that the educators in the Hive and fellows in Open News really value this aspect of being part of the Mozilla community. It could be that we want to formalize our work on this and make it a focal point within our Mozilla Learning strategy.

Building on our work from the last few years, there is a lot more to dig into on web literacy and how it fits into our plans. However, I wanted to get this post up to put a stake in the ground early to establish read | write | participate as the north star to which all Mozilla Learning efforts must align. Being clear about that makes it easier to have discussions about what we should and shouldn’t be doing going forward.

As a next step to dig deeper, Chris Lawrence has agreed to form a web literacy working group. This group will go back into the deeper work we’ve done on the web literacy map, tying that map into read | write | participate and also looking at other frameworks for things like 21st century skills. It should form in the next couple of weeks. Once it has, you’ll be able to track it and participate from the Mozilla Learning planning wiki.


Filed under: mozilla

Benjamin KerensaDon’t Celebrate USA Freedom Act Passage

This Phone is Tapped, Tony Webster (CC BY 2.0)

Mozilla recently announced its support for the USA Freedom Act alongside allies like the EFF, but the EFF ended up withdrawing its support because of deficiencies in the legislation and a recent opinion from an appeals court.

I think Mozilla should have withdrawn its support for this still-flawed bill: while it did push forward some important reforms, it also extended flawed sections of the law that infringe on individuals' civil liberties, such as the Section 206 "roving wiretap" authority. This program essentially allows the FBI access to any phone line, mobile communications, or even internet connections a suspect may be using, without ever having to provide a name to anyone. This is clearly not good legislation, because it allows overreach and lacks a requirement that the communications or accounts being tapped are tied to the subject. While this is just one example, there are many other provisions that allow intelligence and law enforcement agencies to continue their spying, just not as broadly as before.

What we need is smarter legislation that allows law enforcement and intelligence agencies to do their work without infringing on the privacy or civil liberties of everyday Americans, you know, like back when domestic spying was entirely illegal.

Wikipedia does a great job of documenting the USA Freedom Act, and I would encourage folks to check out the article and research this piece of legislation further. Remember, this bill only passed because it had concessions for pro-intelligence legislators, the same folks who created the Patriot Act and opened up spying on Americans in the first place.

I think Mozilla could have done better by withdrawing support and it is good to see that while the EFF is celebrating some parts of the USA Freedom Act it is also mourning some of the concessions.

Armen Zambranomozci 0.7.2 - Support b2g jobs that still run on Buildbot

There are a lot of b2g (aka Firefox OS) jobs that still run on Buildbot.
Interestingly enough we had not tried before to trigger one with mozci.
This release adds support for it.
This should have been a minor release (0.8.0) rather than a patch release (0.7.2). My apologies!
All jobs that start with "b2g_" in all_builders.txt are b2g jobs that still run on Buildbot instead of TaskCluster (docs - TC jobs on treeherder).
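
For illustration, a rough sketch (not mozci's actual API) of picking those jobs out of a local copy of all_builders.txt, assuming the file contains one builder name per line:

    def b2g_buildbot_builders(path="all_builders.txt"):
        """Return the builder names that still run b2g jobs on Buildbot."""
        with open(path) as f:
            return [line.strip() for line in f
                    if line.strip().startswith("b2g_")]

    for name in b2g_buildbot_builders():
        print(name)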


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen Zambranomozci 0.7.1 - regression fix - do not look for files for running jobs

This release mainly fixes a regression we introduced in the release 0.7.0.
The change (#220) we introduced checked completed and running jobs for files that have been uploaded in order to trigger tests.
The problem is that running jobs do not have any metadata until they actually complete.
We fixed this on #234.

Contributions

Thanks to @adusca and @glandium for their contributions on this release.

How to update

Run "pip install -U mozci" to update

Major highlights

  • #234 - (bug fix) - Do not try to find files for running jobs
  • #228 - For try, only trigger talos jobs on existing build jobs, rather than triggering builds for platforms that were not requested
  • #238 - Read credentials through environment variables

Minor improvements

  • #226 - (bug fix) Properly cache downloaded files
  • #228 - (refactor) Move SCHEDULING_MANAGER
  • #231 - Doc fixes

All changes

You can see all changes in here:
0.7.0...0.7.1


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Air MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 17

The Joy of Coding (mconley livehacks on Firefox) - Episode 17 Watch mconley livehack on Firefox Desktop bugs!

Selena DeckelmannTaskCluster migration: about the Buildbot Bridge

Back on May 7, Ben Hearsum gave a short talk about an important piece of technology supporting our transition to TaskCluster, the Buildbot Bridge. A recording is available.

I took some detailed notes to spread the word about how this work is enabling a great deal of important Q3 work like the Release Promotion project. Basically, the bridge allows us to separate out work that Buildbot currently runs in a somewhat monolithic way into TaskGraphs and Tasks that can be scheduled separately and independently. This decoupling is a powerful enabler for future work.

Of course, you might argue that we could perform this decoupling in Buildbot.

However, moving to TaskCluster means adopting a modern, distributed queue-based approach to managing incoming jobs. We will be freed of the performance tradeoffs and careful attention required when using relational databases for queue management (Buildbot uses MySQL for its queues, TaskCluster uses RabbitMQ and Azure). We will also be moving “decision tasks” in-tree, meaning that they will be closer to developer environments and will likely make it easier to keep developer and build system environments in sync.

Here are my notes:

Why have the bridge?

  • Allows a graceful transition
  • We’re in an annoying state where we can’t have dependencies between buildbot builds and taskcluster tasks. For example: we can’t move firefox linux builds into taskcluster without moving everything downstream of those also into taskcluster
  • It’s not practical and sometimes just not possible to move everything at the same time. This lets us reimplement buildbot schedulers as task graphs. Buildbot builds are tasks on the task graphs, enabling us to change each task to be implemented by a Docker worker, a generic worker, or anything we want or need at that point.
  • One of the driving forces is the build promotion project – the funsize and anti-virus scanning and binary moving work. That part is going to be implemented in taskcluster tasks, but the rest will be in Buildbot, so we need to be able to bounce between the two.

What is the Buildbot Bridge (BBB)

BBB acts as a TC worker and provisioner and delegates all those things to BuildBot. As far as TC is concerned, BBB is doing all this work, not Buildbot itself. TC knows nothing about Buildbot.

There are three services:

  • TC Listener: responds to things happening in TC
  • BuildBot Listener: responds to BB events
  • Reflector: takes care of things that can’t be done in response to events — it reclaims tasks periodically, for example. TC expects claimed Tasks to be reclaimed periodically; if a Task stops being reclaimed, TC considers that Task dead.

BBB has a small database that associates build requests with TC taskids and runids.
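
A hedged sketch of what that association could look like. This is purely illustrative and not the Bridge's real schema or code; the table and function names are made up for the example:

    import sqlite3

    conn = sqlite3.connect("bbb_sketch.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS buildrequest_task_map (
            buildrequest_id INTEGER PRIMARY KEY,
            tc_task_id      TEXT NOT NULL,
            tc_run_id       INTEGER NOT NULL
        )""")

    def record_mapping(buildrequest_id, task_id, run_id):
        # Conceptually: called when a pending TC Task is turned into a
        # Buildbot BuildRequest.
        conn.execute(
            "INSERT OR REPLACE INTO buildrequest_task_map VALUES (?, ?, ?)",
            (buildrequest_id, task_id, run_id))
        conn.commit()

    def task_for_buildrequest(buildrequest_id):
        # Look up which TC task/run a Buildbot build request belongs to.
        return conn.execute(
            "SELECT tc_task_id, tc_run_id FROM buildrequest_task_map"
            " WHERE buildrequest_id = ?", (buildrequest_id,)).fetchone()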

BBB is designed to be multihomed. It is currently deployed but not running on three Buildbot masters. We can lose an AWS region and the bridge will still function. It consumes from Pulse.

The system is dependent on Pulse, SchedulerDB and Self-serve (in addition to a Buildbot master and Taskcluster).

Taskcluster Listener

Reacts to events coming from TC Pulse exchanges.

Creates build requests in response to tasks becoming “pending”. When someone pushes to mozilla-central, BBB inserts BuildRequests into the BB SchedulerDB, and pending jobs appear in BB. BBB cancels BuildRequests as well — this can happen because of timeouts or because someone explicitly cancels the task in TC.

Buildbot Listener

Responds to events coming from the BB Pulse exchanges.

Claims a Task when its build starts. Attaches Buildbot Properties (buildslave name and other information/metadata) to Tasks as artifacts. It also resolves those Tasks.

Buildbot and TC don’t have a 1:1 mapping between BB statuses and TC resolutions. The Bridge also needs to coordinate with Treeherder colors. A short discussion happened about implementing these colors in an artifact rather than inferring them from return codes or statuses inherent to BB or TC.

Reflector

  • Runs on a timer – every 60 seconds (see the sketch after this list)
  • Reclaims tasks: need to do this every 30-60 minutes
  • Cancels Tasks when a BuildRequest is cancelled on the BB side (have to troll through BB DB to detect this state if it is cancelled on the buildbot side)
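
A minimal sketch of that loop, with stub callables standing in for the real TaskCluster and Buildbot interactions; it illustrates the cadence described above and is not the Reflector's actual code:

    import time

    def run_reflector(list_active, reclaim_task, list_cancelled, cancel_task,
                      interval=60):
        """Wake up on a fixed timer, keep claims alive, mirror BB cancellations."""
        while True:
            for task_id, run_id in list_active():
                reclaim_task(task_id, run_id)   # keep TC from declaring the Task dead
            for task_id in list_cancelled():
                cancel_task(task_id)            # BuildRequest was cancelled on the BB side
            time.sleep(interval)

    # Wiring with no-op stubs, just to show the shape:
    # run_reflector(lambda: [], lambda t, r: None, lambda: [], lambda t: None)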

Scenarios

  • A successful build!

Task is created. The Task in TC is pending, and nothing exists in BB yet. TCListener picks up the event and creates a BuildRequest (pending).

BB creates a Build. BBListener receives buildstarted event, claims the Task.

Reflector reclaims the Task while the Build is running.

Build completes successfully. BBListener receives log uploaded event (build finished), reports success in TaskCluster.

  • Build fails initially, succeeds upon retry

(500 from hg – common reason to retry)

Same through Reflector.

The BB build fails and is marked as RETRY. BBListener receives the log uploaded event, reports an exception to Taskcluster and calls rerun on the Task.

BB has already started a new Build. TCListener receives the task-pending event, updates the runid, and does not create a new BuildRequest.

The Build completes successfully. The Buildbot Listener receives the log uploaded event and reports success to TaskCluster.

  • Task exceeds deadline before Build starts

Task is created. TCListener receives the task-pending event and creates a BuildRequest. Nothing happens, and the Task goes past its deadline, so TaskCluster cancels it. TCListener receives the task-exception event and cancels the BuildRequest through Self-serve.

QUESTIONS:

  • TC deadline, what is it? Queue: a task past a deadline is marked as timeout/deadline exceeded

  • On TH, if someone requests a rebuild twice, what happens? There is no retry/rerun; we duplicate the subgraph — wherever we retrigger, you get everything below it, so you’d end up with duplicates. Retries and rebuilds are separate: rebuilds are triggered by humans, retries are internal to BB. TC doesn’t have a concept of retries.

  • How do we avoid duplicate reporting? TC will be considered source of truth in the future. Unsure about interim. Maybe TH can ignore duplicates since the builder names will be the same.

  • Replacing the scheduler — what does that mean exactly?

    • Mostly moving decision tasks in-tree — practical impact: YAML files get moved into the tree
    • Remove all scheduling from BuildBot and Hg polling

Roll-out plan

  • Connected to the Alder branch currently
  • Replacing some of the Alder schedulers with TaskGraphs
  • All the BB Alder schedulers are disabled, and a push was able to generate a TaskGraph!

Next steps might be release scheduling tasks, rather than merging into central. Someone else might be able to work on other CI tasks in parallel.

Daniel GlazmanIn praise of Rick Boykin and his Bulldozer editor

Twenty years ago this year, Rick Boykin started a side project while working at NASA. That project, presented a few months later as a poster session at the 4th International Web Conference in Boston (look in section II. Infrastructure), was Bulldozer, one of the first WYSIWYG editors natively made for the Web. I still remember his poster session as the most surprising and amazing short demo of the conference. His work on Bulldozer was a masterpiece, and I sincerely regretted that he stopped working on it, or so it seemed, when he left NASA the next year.

I thanked you twenty years ago, Rick, and let me thank you again today. Happy 20th birthday, Bulldozer. You paved the way and I remember you.