Lucas Rocha: New tablet UI for Firefox on Android

The new tablet UI for Firefox on Android is now available on Nightly and, soon, Aurora! Here’s a quick overview of the design goals, development process, and implementation.

Design & Goals

Our main goal with the new tablet UI was to simplify the interaction with tabs—read Yuan Wang’s blog post for more context on the design process.

In Firefox 36, we focused on getting a solid foundation in place with the core UI changes. It features a brand new tab strip that allows you to create, remove and switch tabs with a single tap, just like in Firefox on desktop.

The toolbar got revamped with a cleaner layout and simpler state changes.

Furthermore, the fullscreen tab panel—accessible from the toolbar—gives you a nice visual overview of your tabs and sets the stage for more advanced features around tab management in future releases.

Development process

At Mozilla, we traditionally work on big features in a separate branch to avoid disruptions in our 6-week development cycles. But that means we don’t get feedback until the feature lands in mozilla-central.

We took a slightly different approach in this project. It was a bit like replacing parts of an airplane while it’s flying.

We first worked on the necessary changes to allow the app to have parallel UI implementations in a separate branch. We then merged the new code to mozilla-central and did most of the UI development there.

This approach enabled us to get early feedback in Nightly before the UI was considered feature-complete.

Implementation

In order to develop the new UI directly in mozilla-central, we had to come up with a way to run either the old or the new tablet UIs in the same build.

We broke up our UI code behind interfaces with multiple concrete implementations for each target UI, used view factories to dynamically instantiate parts of the UI, prefixed overlapping resources, and more.

The new tab strip uses the latest stable release of TwoWayView, which got a bunch of important bug fixes and a couple of new features such as smooth scroll to position.


Besides improving Firefox’s UX on Android tablets, the new UI lays the groundwork for some cool new features. This is not a final release yet and we’ll be landing bug fixes until 36 is out next year. But you can try it now in our Nightly builds. Let us know what you think!

Kim Moir: Mozilla pushes - November 2014

Here's November's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a JSON file.

Trends
Not a record-breaking month; in fact, we are down over 2,000 pushes from last month.

Highlights
10,376 pushes
346 pushes/day (average)
Highest number of pushes/day: 539 pushes on November 12
17.7 pushes/hour (average)

General Remarks
Try had around 38% of all the pushes, and gaia-try had about 30%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) accounted for around 23% of all the pushes.

Records
August 2014 was the month with the most pushes (13,090 pushes)
August 2014 had the highest pushes/day average, with 422 pushes/day
July 2014 had the highest average of "pushes-per-hour", with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day, with 715 pushes

Jeff Walden: Working on the JS engine, Episode V

From a stack trace for a crash:

20:12:01     INFO -   2  libxul.so!bool js::DependentAddPtr<js::HashSet<js::ReadBarriered<js::UnownedBaseShape*>, js::StackBaseShape, js::SystemAllocPolicy> >::add<JS::RootedGeneric<js::StackBaseShape*>, js::UnownedBaseShape*>(js::ExclusiveContext const*, js::HashSet<js::ReadBarriered<js::UnownedBaseShape*>, js::StackBaseShape, js::SystemAllocPolicy>&, JS::RootedGeneric<js::StackBaseShape*> const&, js::UnownedBaseShape* const&) [HashTable.h:3ba384952a02 : 372 + 0x4]

If you can figure out where in that mess the actual method name is without staring at this for at least 15 seconds, I salute you. (Note that when I saw this originally, it wasn’t line-wrapped, making it even less readable.)

I’m not sure how this could be presented better, given the depth and breadth of template use in the class, in the template parameters to that class, in the method, and in the method arguments here.

Wladimir Palant: Dumbing down HTML content for AMO

If you are publishing extensions on AMO then you might have the same problem: how do I keep content in sync between my website and the extension descriptions on AMO? It could have been simple: take the HTML code from your website, copy it into the extension description and save. Unfortunately, this usually won’t produce useful results. The biggest issue: AMO doesn’t understand HTML paragraphs and will strip them out (along with most other tags). Instead it will turn each line break in your HTML code into a hard line break.

Luckily, a fairly simple script can do the conversion and make sure your text still looks somewhat okayish. Here is what I’ve come up with for myself:

#!/usr/bin/env python
import sys
import re

data = sys.stdin.read()

# Normalize whitespace
data = re.sub(r'\s+', ' ', data)

# Insert line breaks after block tags
data = re.sub(r'<(ul|/ul|ol|/ol|blockquote|/blockquote|/li)\b[^<>]*>\s*', '<\\1>\n', data)

# Headers aren't supported, turn them into bold text
data = re.sub(r'<h(\d)\b[^<>]*>(.*?)</h\1>\s*', '<b>\\2</b>\n\n', data)

# Convert paragraphs into line breaks
data = re.sub(r'<p\b[^<>]*>\s*', '', data)
data = re.sub(r'</p>\s*', '\n\n', data)

# Convert hard line breaks into line breaks
data = re.sub(r'<br\b[^<>]*>\s*', '\n', data)

# Remove any leading or trailing whitespace
data = data.strip()

print(data)

This script expects the original HTML code on standard input and prints the result to standard output. The conversions performed are sufficient for my needs; your mileage may vary, e.g. because you aren’t closing paragraph tags or because relative links are used that need resolving. I’m not intending to design a universal solution, so you are free to add more logic to the script as needed.

Edit: Alternatively you can use the equivalent JavaScript code:

var textareas = document.getElementsByTagName("textarea");
for (var i = 0; i < textareas.length; i++)
{
  if (window.getComputedStyle(textareas[i], "").display == "none")
    continue;

  var data = textareas[i].value;

  // Normalize whitespace
  data = data.replace(/\s+/g, " ");

  // Insert line breaks after block tags
  data = data.replace(/<(ul|\/ul|ol|\/ol|blockquote|\/blockquote|\/li)\b[^<>]*>\s*/g, "<$1>\n");

  // Headers aren't supported, turn them into bold text
  data = data.replace(/<h(\d)\b[^<>]*>(.*?)<\/h\1>\s*/g, "<b>$2</b>\n\n");

  // Convert paragraphs into line breaks
  data = data.replace(/<p\b[^<>]*>\s*/g, "");
  data = data.replace(/<\/p>\s*/g, "\n\n");

  // Convert hard line breaks into line breaks
  data = data.replace(/<br\b[^<>]*>\s*/g, "\n");

  // Remove any leading or trailing whitespace
  data = data.trim();

  textareas[i].value = data;
}

This one will convert the text in all visible text areas. You can either run it on AMO pages via Scratchpad or turn it into a bookmarklet (replace !function by void function in the result of this bookmarklet generator, to make sure it works correctly in Firefox).

Yunier José Sosa Vázquez: Privacy Coach for Android

Firefox for Android makes it easier than ever to take control of your privacy on your mobile device. Install Privacy Coach, the new Firefox add-on that will help you understand the privacy options for cookies, tracking, guest browsing and much more.

Privacy Coach gives you an overview of all the privacy-related features and their benefits for you. It also lets you jump straight to each feature's settings and match them to your browsing experience, helping you choose the options that suit you best, all from a single convenient place in your Firefox for Android.

Mozilla has a long history of putting you in control of your privacy; it is built into all of its products and into its mission to make the Web more open, more transparent and safer for everyone. Learn more about how Mozilla and Firefox protect your privacy at https://www.mozilla.org/en-US/privacy/you/.

Install Privacy Coach (Android only).

Tantek Çelik: Why An Open Source Comms OS (Like @FirefoxOS) Matters

This morning I tried to install "Checky" on my iPod 5 Touch and was rejected.

iPod 5 Touch screenshot of an error while trying to install the Checky app: 'Cannot download / This app is not compatible with your device.'

An app that tracks how often you check your mobile device should work regardless of how connected you are or not. From experience I know it is plenty easy to be distracted by apps and such on an iPod touch.

Secondly, I saw this article on GigaOM: Hope you like iOS 8.1.1, because there’s no going back

Apple has made it technically impossible for most people to install an older version of iOS on iPhones, iPads and iPod touch devices

Which links to this article: Apple closes iOS 8.1 signing window, eliminating chance to downgrade

Apple has closed the signing window for iOS 8.1 on compatible iPhone, iPad and iPod touch models, thereby eliminating the ability for users to downgrade to the software version

There's an expectation when you buy a computer, tablet, or mobile device that, if all else goes wrong (or you want to sell it to someone), you can always reinstall the OS it came with and be on your way.

Or if a software update has a bad regression (a bug in something that used to work fine), the user has the ability to revert to the previous version.

However if you're an iOS device user, you're out of luck. Apple now requires iOS 8.1.1 on your iOS device(s).

This is not a theoretical problem. iOS8 broke facetime: URLs. In particular, if you're running iOS8 (any version thru 8.1.1), and you tap on a facetime: URL with a destination to dial, it will prompt you, then open Facetime, and then do nothing.

What worked fine in iOS7: tapping on a facetime: URL prompts you to make sure you want to call that person, and then launches the Facetime application directly into starting a conversation with that person.

That facetime: URL scheme is proprietary to Apple. No one else uses it. And they broke it. It's one of the URLs in the URLs For People Focused Mobile Communication.

I've deliberately refrained from upgrading to iOS8 for this reason alone. If Apple regressed with such a simple and obvious bug, what other less obvious bugs did they ship iOS8 with?

Contrast this with open source alternatives like FirefoxOS.

You as the user should be in control. If you want to (re)install the original software that your device came with, you should be able to.

The hope is that with an open source alternative, users will have that choice, rather than being locked in, and prevented from returning their devices to the factory settings that they came with.

Mike Hommey: Logging Firefox memory allocations

A couple of years ago, when I was actively working on integrating jemalloc 3 into the Firefox build and started investigating a memory usage regression compared to our old fork, I came up with a replace-malloc library for Firefox that would log all the allocations and allow replaying them in a more consistent (and faster) way in a separate program, so that testing different configurations of jemalloc with the same workload can be streamlined.

A couple weeks ago, I refreshed that work, and made it work on all the tier-1 Firefox desktop platforms. That work is now in the tree instead of on my hard drive, and will allow us to test the effects of jemalloc changes in a better way.

Here is the gist of how to use this feature:

  • Start Firefox with the following environment variables:
    • on Linux:

      LD_PRELOAD=/path/to/memory/replace/logalloc/liblogalloc.so

    • on Mac OS X:

      DYLD_INSERT_LIBRARIES=/path/to/memory/replace/logalloc/liblogalloc.dylib

    • on Windows:

      MOZ_REPLACE_MALLOC_LIB=/path/to/memory/replace/logalloc/logalloc.dll

    • on all the above:

      MALLOC_LOG=/path/to/log-file

  • Play your workload in Firefox, then close it.
  • Run the following command to prepare the log file for replay:

    python /source/path/to/memory/replace/logalloc/replay/logalloc_munge.py < /path/to/log-file > /path/to/replay.log

  • Replay the logged allocations with the following command:

    /path/to/memory/replace/logalloc/replay/logalloc-replay < /path/to/replay.log

More information and implementation details can be found in the README accompanying the code for that functionality.

Mike Hommey: Mozilla Build System: past, present and future

The Mozilla Build System has, for most of its history, not changed much. But, for a couple years now, we’ve been, slowly and incrementally, modifying it in quite extensive ways. This post summarizes the progress so far, and my personal view on where we’re headed.

Recursive make

The Mozilla Build System has, all along, been implemented as a set of recursively traversed Makefiles. The way it has been working for a very long time looks like the following:

  • For each tier (group of source directories) defined at the top-level:
    • For each subdirectory in current tier:
      • Build the export target recursively for each subdirectory defined in Makefile.
      • Build the libs target recursively for each subdirectory defined in Makefile.

The typical limitation due to the above is that some compiled tests from a given directory would require a library that’s not linked until after the given directory is recursed, so another target was later added on top of that (tools).

There was not much room for parallelism, except in individual directories, where multiple sources could be built in parallel, but never would sources from multiple directories be built at the same time. So, for a bunch of directories where it was possible, special rules were added to allow that to happen, which led to interesting recursions:

  • For each of export, libs, and tools:
    • Build the target in the subdirectories that can be built in parallel.
    • Build the target in the current directory.
    • Build the target in the remaining subdirectories.

This ensured some extra fun with dependencies between (sub)directories.

Apart from the way things were recursed, all sorts of custom build rules had piled up, some of which relied on things in other directories having happened beforehand, and the build system implementation itself relied on some quite awful things (remember allmakefiles.sh?)

Gradual Overhaul

Around two years ago, we started a gradual overhaul of the build system.

One of the goals was to move away from Makefiles. For various reasons, we decided to go with our own kind-of-declarative (but really, sandboxed python) format (moz.build) instead of using e.g. gyp. The more progress we make on the build system, the more I think this was the right choice.

Anyways, while we’ve come a long way and converted a lot of Makefiles to moz.build, we’re not quite there yet:

One interesting thing to note in the above graph is that we’ve also been reducing the overall number of moz.build files we use, by consolidating some declarations. For example, some moz.build files now declare source or test files from their subdirectories directly, instead of having one file per directory declare sources and test files local to their own directory.

Pseudo derecursifying recursive Make

Neologism aside, one of the ideas to help with the process of converting the build system to something that can be parallelized more massively was to reduce the depth of recursion we do with Make, so that instead of a sequence like this:

  • Entering directory A
    • Entering directory A/B
    • Leaving directory A/B
    • Entering directory A/C
      • Entering directory A/C/D
      • Leaving directory A/C/D
      • Entering directory A/C/E
      • Leaving directory A/C/E
      • Entering directory A/C/F
      • Leaving directory A/C/F
    • Leaving directory A/C
    • Entering directory A/G
    • Leaving directory A/G
  • Leaving directory A
  • Entering directory H
  • Leaving directory H

We would have a sequence like this:

  • Entering directory A
  • Leaving directory A
  • Entering directory A/B
  • Leaving directory A/B
  • Entering directory A/C
  • Leaving directory A/C
  • Entering directory A/C/D
  • Leaving directory A/C/D
  • Entering directory A/C/E
  • Leaving directory A/C/E
  • Entering directory A/C/F
  • Leaving directory A/C/F
  • Entering directory A/G
  • Leaving directory A/G
  • Entering directory H
  • Leaving directory H

For each directory there would be a directory-specific target per top-level target, such as A/B/export, A/B/libs, etc. Essentially those targets are defined as:

%/$(target):
        $(MAKE) -C $* $(target)

And each top-level target is expressed as a set of dependencies, such as, in the case above:

A/B/libs: A/libs
A/C/libs: A/B/libs
A/C/D/libs: A/C/libs
A/C/E/libs: A/C/D/libs
A/C/F/libs: A/C/E/libs
A/G/libs: A/C/F/libs
H/libs: A/G/libs
libs: H/libs

That dependency list, instead of being declared manually, is generated from the “traditional” recursion declaration we had in moz.build, through the various *_DIRS variables. I’ll skip the gory details about how this replicated the weird subdirectory orders I mentioned above when you had both PARALLEL_DIRS and DIRS. They are irrelevant today anyways.
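
For illustration, here is a rough sketch of how such a chained dependency list could be derived from an ordered traversal. This is not the actual build backend code; the function name and the way directories are represented are made up for the example:

# Sketch: turn an ordered depth-first traversal of directories (as declared
# through the *_DIRS variables in moz.build files) into chained make
# dependencies like "A/C/libs: A/B/libs".

def emit_chained_deps(ordered_dirs, target):
    """ordered_dirs is the sequence of directories in traversal order,
    e.g. ['A', 'A/B', 'A/C', 'A/C/D', ...]."""
    lines = []
    previous = None
    for directory in ordered_dirs:
        if previous is not None:
            lines.append('%s/%s: %s/%s' % (directory, target, previous, target))
        previous = directory
    # The top-level target depends on the last directory in the chain.
    lines.append('%s: %s/%s' % (target, previous, target))
    return '\n'.join(lines)

if __name__ == '__main__':
    dirs = ['A', 'A/B', 'A/C', 'A/C/D', 'A/C/E', 'A/C/F', 'A/G', 'H']
    print(emit_chained_deps(dirs, 'libs'))

Run against the example tree above, this prints exactly the dependency list shown earlier, ending with libs: H/libs.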

You’ll also note that this removed the notion of tier, and that the entirety of the tree is dealt with for each target, instead of iterating over all targets for each group of directories.

[Note that, confusingly, for practical reasons related to the amount of code changes required and to how things were incrementally set in place, those targets are now called tiers.]

From there, parallelizing parts of the build only involves reorganizing those Make dependencies such that Make can go in several subdirectories at once, the ultimate goal being:

libs: A/libs A/B/libs A/C/libs A/C/D/libs A/C/E/libs A/C/F/libs A/G/libs A/H/libs

But then, things in various directories may need other things built in different directories first, such that additional dependencies can be needed, for example:

A/B/libs A/C/libs: A/libs
A/H/libs: A/G/libs

The trick is that those dependencies can in many cases be deduced from the contents of moz.build. That’s where we are today, for example, for everything related to compilation: we added a compile target that deals with everything related to C/C++ compilation, and we have dependencies between directories building objects and directories building libraries and executables that depend on those objects. You can have a taste yourself:

$ mach clobber
$ mach configure
$ mach build export
$ mach build -C / toolkit/library/target

[Note, the compile target is actually composed of two sub-targets, target and host.]

The third command runs the export target on the whole tree, because that’s a prerequisite that is not expressed as a make dependency yet (it would be too broad of a dependency anyways).

The last command runs the toolkit/library/target target at the top level (as opposed to mach build toolkit/library/target, which runs the target target in the toolkit/library directory). That will build libxul, and everything needed to link it, but nothing else unrelated.

All the other top-level targets have also received this treatment to some extent:

  • The export target was made entirely parallel, although it keeps some cross-directory dependencies derived from the historical traversal data.
  • The libs target is still entirely sequential, because of all the horrible rules in it that may or may not depend on the historical traversal order.
  • A parallelized misc target was recently created to receive all the things that are currently done as part of the libs target that are actually safe to be parallelized (another reason for creating this new target is that libs is now a misnomer, since everything related to libraries is now part of compile). Please feel free to participate in the effort to move libs rules there.
  • The tools target is still sequential, but see below.

That said, some interdependencies that can’t be derived yet are currently hardcoded in config/recurse.mk.

And since the mere fact of entering a directory, figuring out if there’s anything to do at all, and leaving a directory takes some time on builds when only a couple of things changed, we also skip some directories entirely by making them not have a directory/target target at all (a sketch of the decision logic follows the list):

  • export and libs only traverse directories where there is a Makefile.in file, or where the moz.build file sets variables that do require something to be done in those targets.
  • compile only traverses directories where there is something to compile or link.
  • misc only traverses directories with an explicit HAS_MISC_RULE = True when they have a Makefile.in, or with a moz.build setting variables affecting the misc target.
  • tools only traverses directories that contain a Makefile.in containing tools:: (there’s actually a regexp, but in essence, that’s it).
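
As a rough sketch of that decision logic (the class, field and function names here are made up for illustration; the real logic lives in the build backend):

from dataclasses import dataclass, field

@dataclass
class Directory:
    # Hypothetical summary of what a directory's Makefile.in / moz.build declare.
    has_makefile_in: bool = False
    relevant_targets: set = field(default_factory=set)  # targets affected by moz.build variables
    has_sources: bool = False      # anything to compile or link
    has_misc_rule: bool = False    # explicit HAS_MISC_RULE = True
    has_tools_rule: bool = False   # Makefile.in contains "tools::"

def has_target(d, target):
    """Decide whether a directory gets a <directory>/<target> target at all."""
    if target in ('export', 'libs'):
        return d.has_makefile_in or target in d.relevant_targets
    if target == 'compile':
        return d.has_sources
    if target == 'misc':
        return (d.has_makefile_in and d.has_misc_rule) or 'misc' in d.relevant_targets
    if target == 'tools':
        return d.has_tools_rule
    return False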

Those skipping rules mean we only traverse (taking numbers from a local Linux build, as of a couple of weeks ago):

  • 186 directories during export,
  • 472 directories during compile,
  • 161 directories during misc,
  • 382 directories during libs,
  • 2 directories during tools.

instead of 850 for each.

In the near future, we may want to change export and libs to opt-ins, like misc, instead of opt-outs.

Alternative build backends

With more and more declarations in moz.build files, we’ve been able to build up some alternative build backends for Eclipse and Microsoft Visual Studio. While they still are considered experimental, they seem to work well enough for people to take advantage of them in some useful ways. They however still rely on the “traditional” Make backend to build various things (to the best of my knowledge).

Ultimately, we would like to support entirely different build systems such as ninja or tup, but at the moment, many things still heavily rely on the “traditional” Make backend. We’re getting close to having everything related to compilation available from the moz.build declarations (ignoring third-party code, but see further below), but there is still a long way to go for other things.

In the near future, we may want to implement hybrid build backends where compilation would be driven by ninja or tup, and the rest of the build would be handled by Make. However, my feeling is that the Make backend is fast enough for compilation, and that ninja doesn’t bring enough besides performance to make a ninja backend worth investing in. Tup is different because it does solve some of the problems with incremental builds.

While on the topic of having everything related to compilation available from moz.build declarations: closing in on that goal would, beyond enabling such hybrid build systems, also allow better integration of tools such as static code analyzers.

Unified C/C++ sources

Compiling C code, and even more so compiling C++ code, involves reading the same headers a large number of times. In C++, that usually also means instantiating the same templates and compiling the same inline methods numerous times.

So we’ve worked around this by creating “unified sources” that just #include the actual source files, grouping them 16 by 16 (except when specified otherwise).
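
As a rough illustration (this is not the actual build backend code; names are made up), generating unified sources boils down to something like the following:

# Sketch: group source files 16 by 16 into generated "unified" files that
# merely #include the real sources; the build then compiles the generated
# files instead of the originals.

def write_unified_sources(sources, files_per_unified_file=16):
    generated = []
    for i in range(0, len(sources), files_per_unified_file):
        chunk = sources[i:i + files_per_unified_file]
        name = 'UnifiedSource%d.cpp' % (i // files_per_unified_file)
        with open(name, 'w') as fh:
            for source in chunk:
                fh.write('#include "%s"\n' % source)
        generated.append(name)
    return generated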

This approach reduced build times drastically, and, interestingly, reduced the size of DWARF debugging symbols as well. It does have a couple of downsides, though. It allows #include impurity in the code in such a way that e.g. changes in the file groups can lead to subtle build failures. It also makes incremental builds significantly slower in parts of the tree where compiling one file is already somewhat slow, so doing 16 at a time can be a drag for people working on that code. This is why we’re considering decreasing the number for the javascript engine in the near future. We should probably investigate where else building one unified source is slow enough to be a concern. I don’t think we can rely on people actively complaining about it; they’ve been too used to slow build times to care to file bugs about it.

Relatedly, our story with #include dependencies is suboptimal to say the least; several of them have been untangled, but there’s still a long tail. It’s a hard problem to solve, even with tools like IWYU.

Fake libraries

For a very long time, the build system was building intermediate static libraries, and then was linking them together to form shared libraries. This is how the main components were built in the old days before libxul (remember libgklayout?), and it was how libxul was built when it was created. For various reasons, ranging from disk space waste to linker inefficiencies, this was replaced by building fake libraries, that only reference the objects that would normally be contained in the static library that used to be built. This was later changed to use a more complex system allowing more flexibility.

Fast-forward to today, and with all the knowledge from moz.build, one of the use cases of that more complex system was made moot, and in the future, those fake libraries could be generated as build backend files instead of being created during the build.

Specialized incremental builds

When iterating C/C++ code patches, one usually needs to (re)compile often. With the build system having the overhead it has, and rebuilding with no change taking many seconds (it has been around a minute for a long time on my machine and is now around half of that, although, sadly, I had got it down to 20 seconds but that regressed recently, damn libs rules), we also added a special rule that only handles header changes and rebuilding objects, libraries and executables.

That special rule is invoked as:

$ mach build binaries

And takes about 3.5s on my machine when there are no changes. It used to be faster thanks to clever tricks, but that was regressed on purpose. That’s a trade-off, but linking libxul, which most code changes require, takes much longer than that anyways. If deemed necessary, the same clever tricks could be restored.

While we work to improve the overall build experience, in the near future we should have one or more special rules for non-compilation use-cases. For Firefox frontend developers, the following command may do part of the job:

$ mach build -C / chrome

but we should have better and more complete commands for them and for e.g. Firefox for Android developers.

We also currently have a build option allowing one to entirely skip everything that is compilation related (--disable-compile-environment), but it is currently broken and is only really useful in a few use cases. In the near future, we need build modes that allow using e.g. nightly builds as if they were the result of compiling C++ source. This would allow some classes of developers to skip compilation altogether, which is an unnecessary overhead for them at the moment, since they need to compile at least once (and with all the auto-clobbers we have, it’s much more than that).

Localization

Related to the above, the experience of building locale packs and repacks for Firefox is dreadful. Even worse than that, it also relies on a big pile of awful Make rules. It probably is, along with the code related to the creation of Firefox tarballs and installers, the most horrifying part of the build system. Its entanglement with release automation also makes improving the situation unnecessarily difficult.

While there are some sorts of tests running on every push, there are many occasions where those tests fail to catch regressions that lead to broken localized builds for nightlies, or worse on beta or release (which, you’ll have to admit, is a sadly late moment to find such regressions).

Something really needs to be done about localization, and hopefully the discussions we’ll have this week in Portland will lead to improvements in the short to medium term.

Install manifests

The build system copies many files during the build. From the source directory to the “object” directory. Sometimes in $(DIST)/somedir, sometimes elsewhere. Sometimes in both or more. On non-Windows systems, copies are replaced by symbolic links. Sometimes not. There are also files that are preprocessed during the build.

All those used to be handled by Make rules invoking $(NSINSTALL) on every build. Even when the files hadn’t changed. Most of these were replaced by some Makefile magic, but many are now covered with so-called “install manifests”.

Others, defined in jar.mn files, used to be added to jar files during the build. While those jars are not created anymore because of omni.ja, the corresponding content is still copied/symlinked and defined in jar.mn.

In the near future, all those should be switched to install manifests somehow, and that is greatly tied to solving part of the localization problem: currently, localization relies on Make overrides that moz.build can’t know about, preventing install manifests being created and used for the corresponding content.

Faster configure

One of the very first things the build system does when a build starts from scratch is to run configure. That’s a part of the build system that is based on the antiquated autoconf 2.13, with 15+ years of accumulated linear m4 and shell gunk. That’s what detects what kind of compiler you use, how broken it is, how broken its headers are, what options you requested, what application you want to build, etc.

Topping that, it also invokes configure from third-party software that happen to live in the tree, like ICU or jemalloc 3. Those are also based on autoconf, but in more recent versions than 2.13. They are also third-party, so we’re essentially only importing them, as opposed to actively making them bigger for those that are ours.

While it doesn’t necessarily look that bad when running on e.g. Linux, the time it takes to run all this pile of shell scripts is painfully horrible on Windows (like, taking more than 5 minutes on automation). While there’s still a lot to do, various improvements were recently made:

  • Some classes of changes (such as modifying configure.in) make the build system re-run configure. It used to trigger every configure to run again, but now only re-runs a relevant subset.
  • They used to all run sequentially, but apart from the top-level one, which still needs to run before all the others, they now all run in parallel (see the sketch below for the idea). This cut configure times almost in half on Windows clobber builds on automation.
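
Here is a sketch of the idea only, not the actual code; the paths and the way configure is invoked are assumptions for the sake of the example:

# Sketch: run the top-level configure first, then the independent
# subconfigures (ICU, jemalloc, etc.) concurrently.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_configure(srcdir, objdir):
    # Assumed invocation; the real build system passes many more arguments.
    return subprocess.call(['sh', srcdir + '/configure'], cwd=objdir)

def configure_all(top, subconfigures):
    # The top-level configure still needs to run before all the others.
    if run_configure(*top) != 0:
        raise RuntimeError('top-level configure failed')
    # The remaining configures are independent and can run in parallel.
    with ThreadPoolExecutor() as executor:
        results = list(executor.map(lambda args: run_configure(*args),
                                    subconfigures))
    if any(results):
        raise RuntimeError('a subconfigure failed')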

In the future, we want to get rid of autoconf 2.13 and use smart lazy python code to only run the tests that are relevant to the configure options. How this would all exactly work has, as of writing, not been determined. It’s been on my list of things to investigate for a while, but hasn’t reached the top. In the near future, though, I would like to move all our autoconf code related to the build toolchain (compiler and linker) to some form of python code.

Zaphod beeblebuild

There are essentially two main projects in the mozilla-central repository: Firefox/Gecko and the Javascript engine. They use the same build system in many ways. But for a very long time, they actually relied on different copies of the same build system files, like config/rules.mk or build/autoconf/*.m4. And we had a check-sync-dirs script verifying that both projects were indeed using the same file contents. Countless times, we’ve had landings forgetting to synchronize the files and leading to a check-sync-dirs error during the build. I plead guilty to having landed such things multiple times, and so did many other people.

Those days are now long gone, but we currently rely on dirty tricks that still keep the Firefox/Gecko and Javascript engine build systems half separate. So we kind of replaced a conjoined-twins system with a biheaded system. In the future, and this is tied to the section above, both build systems would be completely merged.

Build system interface

Another goal of the build system changes was to make the build and test experience better. Especially, running tests was not exactly the most pleasant experience.

A single entry point to the build system was created in the form of the mach tool. It simplifies and self-documents many of the workflows that required arcane knowledge.

In the future, we will deprecate the historical build system entry points, or replace their implementation to call mach. This includes client.mk and testing/testsuite-targets.mk.

moz.build

Yet another goal of the build system changes was to improve the experience developers have when adding code to the tree. Again, while there is still a lot to be done on the subject, there have been a lot of changes in the past year that I hope have made developers’ lives easier.

As an example, adding new code to libxul previously required:

  • Creating a Makefile.in file
    • Defining a LIBRARY_NAME.
    • Defining which sources to build with CPPSRCS, CSRCS, CMMSRCS, SSRCS or ASFILES, using the right variable name for each source type (C++, C, Obj-C, or assembly; by the way, did you know there was a difference between SSRCS and ASFILES?).
  • Adding something like SHARED_LIBRARY_LIBS += $(call EXPAND_LIBNAME_PATH,libname,$(DEPTH)/path) to toolkit/library/Makefile.in.

Now, adding new code to libxul requires:

  • Creating a moz.build file
    • Defining which sources to build with SOURCES, whether they are C, C++ or other.
    • Defining FINAL_LIBRARY to 'xul' (a minimal sketch follows below).
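
For example, a minimal moz.build along those lines might look like the following (the file name is made up for the example):

# moz.build (sketch)
SOURCES += [
    'MyComponent.cpp',
]

# Link the resulting objects into libxul.
FINAL_LIBRARY = 'xul'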

This is only a simple example, though. There are more things that should have gotten easier, especially since support for templates landed. Templates allow hiding some details such as dependencies on the right combination of libxul, libnss, libmozalloc and others when building programs or XPCOM components. Combined with syntactic sugar and recent changes to how moz.build data is handled by build backends, we could, in the future, allow defining multiple targets in a single directory. Currently, if you want to build e.g. a library and a program, or multiple libraries, in the same directory, well, essentially, you can’t.

Relatedly, moz.build currently suffers from how it was grown from simply moving definitions from Makefile.in, and how those definitions in Makefile.in were partly tied to how Make works, and how config.mk and rules.mk work. Consolidating CPPSRCS, CSRCS and other variables into a single SOURCES variable is something that should be done more broadly, and we should bring more consistency to how things are defined (for example NO_PGO vs. no_pgo depending on the context, etc.). Incidentally, I think those changes can be made in a way that simplifies the build backend python code.

Multipass

Some build types, while unusual for developers to do locally on their machine, happen regularly on automation, and are done in awful or inefficient ways.

First, Profile Guided Optimized (PGO) builds. The core idea for those builds is to build once with instrumentation, run the resulting instrumented binary against a profile, and rebuild with the data gathered from that. In our build system, this is what actually happens on Linux:

  • Build everything with instrumentation.
  • Run instrumented binary against profile.
  • Remove half the object directory, including many non-compiled code things that are generated during a normal build.
  • Rebuild with optimizations guided by the collected data.

Yes, the last step repeats things from the first step that do not need to be repeated.

Second, Mac universal builds, which happen in the following manner:

  • Build everything for i386.
  • Build everything for x86-64.
  • Merge the result of both builds.

Yes, “everything” in both “Build everything” includes the same non-compiled files. Then the third step checks that those non-compiled files actually match (and for buildconfig.html, has special treatment) and merges the i386 and x86-64 binaries in Mach-o fat binaries. Not only is this inefficient, but the code behind this is terrible, although it got better with the new packager code. And reproducing universal builds locally is not an easy task.

In the future, the build system would be able to compile binaries for different targets in a way that doesn’t require jumping through hoops like the above. It could even allow building e.g. a JavaScript shell for the build machine during a cross-compilation for Android, without involving wrapper scripts to handle the situation.

Third party code

Building Firefox involves building several third party libraries. In some cases, they use gyp, and we convert their gyp files to moz.build at import time (angle, for instance). In other cases, they use gyp, and we just use those gyp files through moz.build rules, such that the gyp processing is done during configure instead of at import time (webrtc). In yet other cases, they use autoconf and automake, but we use a moz.build file to build them, while still running their configure script (freetype and jemalloc). In all those cases, the sources are handled as if they had been Mozilla code all along.

But in the case of NSPR, NSS and ICU, we don’t necessarily build them in ways their respective build systems (were meant to) allow, and we rely on hacks around their build systems to do our bidding. This is especially true for NSS (don’t look at config/external/nss/Makefile.in if you care about your sanity). On top of using atrocious hacks, that makes the build dependent on Make for compilation, in inefficient ways at that.

In the future, we wouldn’t rely on the NSPR, NSS and ICU build systems, and would build them as if they were Mozilla code, like the others. We need to find ways to allow that while limiting the cost of updates to new versions. This is especially true for ICU, which is entirely third party. For NSPR and NSS, we have some kind of foothold. Although it is highly unlikely we can make them switch to moz.build (and in the current state of moz.build, being very tied to Gecko, not possible without possibly significant changes), we can probably come up with schemes that would allow us to e.g. easily generate moz.build files from their Makefiles and/or manifests. I expect this to be somewhat manageable for NSS and NSPR. ICU is an entirely different story, sadly.

And more

There are many other aspects of the build system that I’m not mentioning here, but you’ll excuse me as this post is already long enough (and took much longer to write than it really should).

Yunier José Sosa Vázquez: [Solved] Where is the Firefox Hello button?

Yesterday, when we presented the new version of Firefox and its new features, the absence of one of its main features stood out, but in reality it was always there. Apparently, it was a configuration problem that won't happen in other versions of Firefox. In this article you will find the solution to the problem.

If the Firefox Hello button does not appear in your toolbar or in the menu, it may still be in the Additional Tools and Features panel, waiting for you to drag it wherever suits you best.

Follow these steps:

  1. In the top-right corner of the browser, click the menu button.
  2. Click Customize to see Firefox's additional tools and features.
  3. Drag the Hello icon from the additional tools and features window to your toolbar or menu.
  4. Finally, click Exit Customize.

If the Hello button does not appear in the Additional Tools and Features panel, go to the about:config page and change the loop.throttled preference to false. Then restart Firefox, and when you open it again you will have the icon in the toolbar. In the meantime, you can also receive calls from other users.

Source: Firefox Help

Pascal Finette: Funny Facebook Messages Exchange

I just had an amazing Facebook Messages exchange with someone who asked me to write for his startup newsletter/website. Guess he’s not quite as empathetic about entrepreneurs as I and many others are. But read for yourself:

Mr X:

Great to meet you Pascal
I am founder at —REDACTED—
I would love to learn about your work

Pascal Finette:

How can I help?

Mr X:

I would like to invite you to write for —REDACTED—

Pascal Finette:

That’s a very kind invitation. But truth be told - between The Heretic, writing a book and all the other work I do, I just have no time to write for another outlet.

Mr X:

What is the book about?

Pascal Finette:

Entrepreneurship.

Mr X:

What is new in the topic?

Pascal Finette:

You tell me.

Mr X:

You just need to do it
Books will not help

Pascal Finette:

Why publish a website about this topic then? Just need to do it. Your website doesn’t help.

Mr X:

I know
It is for wannabees
Do you write about methodology or best practices?

Pascal Finette:

Just so I get this straight - you asked me to write for your website which you consider ‘for wannabees’? So you effectively tell me that you consider my writing ‘for wannabees’? Man, I would consider my approach to this whole thing… That surely is not the way you win partners.

After that last message I didn’t hear back from him. I wonder why?! :)

Michael Kaply: Sunsetting the Original CCK Wizard

In the next few weeks, I'll be sunsetting the original CCK Wizard and removing it from AMO. It really doesn't work well with current Firefox versions anyway, so I'm surprised it still has so many users.

If for some reason you're still using the old CCK Wizard, please let me know why so I can make sure what you need is integrated into the CCK2.

I'm also looking for ideas for new posts for my blog, so if there is some subject around deploying or customizing Firefox that you want to know more about, please let me know.

Luis Villa: Free-riding and copyleft in cultural commons like Flickr

Flickr recently started selling prints of Creative Commons Attribution-Share Alike photos without sharing any of the revenue with the original photographers. When people were surprised, Flickr said “if you don’t want commercial use, switch the photo to CC non-commercial”.

This seems to have mostly caused two reactions:

  1. “This is horrible! Creative Commons is horrible!”
  2. “Commercial reuse is explicitly part of the license; I don’t understand the anger.”

I think it makes sense to examine some of the assumptions those users (and many license authors) may have had, and what that tells us about license choice and design going forward.

Free ride!!, by Dhinakaran Gajavarathan, under CC BY 2.0

Free riding is why we share-alike…

As I’ve explained before here, a major reason why people choose copyleft/share-alike licenses is to prevent free rider problems: they are OK with you using their thing, but they want the license to nudge (or push) you in the direction of sharing back/collaborating with them in the future. To quote Elinor Ostrom, who won a Nobel for her research on how commons are managed in the wild, “[i]n all recorded, long surviving, self-organized resource governance regimes, participants invest resources in monitoring the actions of each other so as to reduce the probability of free riding.” (emphasis added)

… but share-alike is not always enough

Copyleft is one of our mechanisms for this in our commons, but it isn’t enough. I think experience in free/open/libre software shows that free rider problems are best prevented when three conditions are present:

  • The work being created is genuinely collaborative — i.e., many authors who contribute similarly to the work. This reduces the cost of free riding to any one author. It also makes it more understandable/tolerable when a re-user fails to compensate specific authors, since there is so much practical difficulty for even a good-faith reuser to evaluate who should get paid and contact them.
  • There is a long-term cost to not contributing back to the parent project. In the case of Linux and many large software projects, this long-term cost is about maintenance and security: if you’re not working with upstream, you’re not going to get the benefit of new fixes, and will pay a cost in backporting security fixes.
  • The license triggers share-alike obligations for common use cases. The copyleft doesn’t need to perfectly capture all use cases. But if at least some high-profile use cases require sharing back, that helps discipline other users by making them think more carefully about their obligations (both legal and social/organizational).

Alternately, you may be able to avoid damage from free rider problems by taking the Apache/BSD approach: genuinely, deeply educating contributors, before they contribute, that they should only contribute if they are OK with a high level of free riding. It is hard to see how this can work in a situation like Flickr’s, because contributors don’t have extensive community contact. [1]

The most important takeaway from this list is that if you want to prevent free riding in a community-production project, the license can’t do all the work itself — other frictions that somewhat slow reuse should be present. (In fact, my first draft of this list didn’t mention the license at all — just the first two points.)

Flickr is practically designed for free riding

Flickr fails on all the points I’ve listed above — it has no frictions that might discourage free riding.

  • The community doesn’t collaborate on the works. This makes the selling a deeply personal, “expensive” thing for any author who sees their photo for sale. It is very easy for each of them to find their specific materials being reused, and see a specific price being charged by Yahoo that they’d like to see a slice of.
  • There is no cost to re-users who don’t contribute back to the author—the photo will never develop security problems, or get less useful with time.
  • The share-alike doesn’t kick in for virtually any reuses, encouraging Yahoo to look at the relationship as a purely legal one, and encouraging them to forget about the other relationships they have with Flickr users.
  • There is no community education about the expectations for commercial use, so many people don’t fully understand the licenses they’re using.

So what does this mean?

This has already gone on too long, but a quick thought: what this suggests is that if you have a community dedicated to creating a cultural commons, it needs some features that discourage free riding — and critically, mere copyleft licensing might not be good enough, because of the nature of most production of commons of cultural works. In Flickr’s case, maybe this should simply have included not doing this, or making some sort of financial arrangement despite what was legally permissible; for other communities and other circumstances other solutions to the free-rider problem may make sense too.

And I think this argues for consideration of non-commercial licenses in some circumstances as well. This doesn’t make non-commercial licenses more palatable, but since commercial free riding is typically people’s biggest concern, and other tools may not be available, it is entirely possible it should be considered more seriously than free and open source software dogma might have you believe.

  1. It is open to discussion, I think, whether this works in Wikimedia Commons, and how it can be scaled as Commons grows.

Gervase Markham: An Invitation

Ben Smedberg boldly writes:

I’d like to invite my blog readers and Mozilla coworkers to Jesus Christ.

Making a religious invitation to coworkers and friends at Mozilla is difficult. We spend our time and build our deepest relationships in a setting of email, video, and online chat, where off-topic discussions are typically out of place. I want to share my experience of Christ with those who may be interested, but I don’t want to offend or upset those who aren’t.

This year, however, presents me with a unique opportunity. Most Mozilla employees will be together for a shared planning week. If you will be there, please feel free to find me during our down time and ask me about my experience of Christ.

Amen to all of that. Online collaboration is great, but as Ben says, it’s hard to find opportunities to discuss things which are important outside of a Mozilla context. There are several Christians at Mozilla attending the work week in Portland (roc is another, for example) and any of us would be happy to talk.

I hope everyone has a great week!

Tantek Çelik: Raising The Bar On Open Web Standards: Supporting More Openness

Tantek Çelik wears a blue beanie in honor of the 8th annual Blue Beanie Day

Yesterday was the 8th annual Blue Beanie Day celebrating web standards. Jeffrey Zeldman called for celebrating community diversity and pledging “to keep things moving in a positive, humanist direction”. In addition to fighting bad behaviors, we should also push for more good behaviors, more openness, and more access across a more broadly diverse community.

I've written about the open web as well as best practices for open web standards development before. It's time to update those and raise the bar on what we mean and want as "open".

In summary we should support open web standards that are:

  1. Free (of cost) to read (as opposed to "pay to download" as noted)
  2. Free(dom) to implement (royalty free, CC0)
  3. Free(dom and of cost) to discuss
  4. Free(dom) to update (e.g. by republishing with suggested changes)
  5. Published on the open web itself
  6. Published with open web formats

While some of those criteria have an obvious explanation like no cost to download, others have more subtle and lengthier explanations, like supporting standards licensed with CC0 to allow a more diverse set of communities to make suggestions, e.g. via republishing with changes, as well as direct incorporation of (pseudo)code in those standards into a more diverse set of (e.g. open source) implementations.

Not all web standards, even "open" web standards, are created equal, nor are they equally "open". We should (must) support ever more openness in web standards development, as that benefits more contributors as well as enabling more rapid evolution of those standards.

Adrian Gaudebert: Socorro: the Super Search Fields guide

Socorro has a master list of fields, called the Super Search Fields, that controls several parts of the application: Super Search and its derivatives (Signature report, Your crash reports... ), available columns in report/list/, and exposed fields in the public API. Fields contained in that list are known to the application, and have a set of attributes that define the behavior of the app regarding each of those fields. An explanation of those attributes can be found in our documentation.

In this guide, I will show you how to use the administration tool we built to manage that list.

You need to be a superuser to be able to use this administration tool.

Understanding the effects of this list

It is important to fully understand the effects of adding, removing or editing a field in this Super Search Fields tool.

A field needs to have a unique Name, and a unique combination of Namespace and Name in database. Those are the only mandatory values for a field. Thus, if a field does not define any other attribute and keeps the default values, it won't have any impact in the application -- it will merely be "known", that's all.

Now, here are the important attributes and their effects (an illustrative sketch of a field definition follows the list):

  • Is exposed - if this value is checked, the field will be accessible in Super Search as a filter.
  • Is returned - if this value is checked, the field will be accessible in Super Search as a facet / aggregation. It will also be available as a column in Super Search and report/list/, and it will be returned in the public API.
  • Permissions needed - permissions listed in this attribute will be required for a user to be able to use or see this field.
  • Storage mapping - this value will be used when creating the mapping to use in Elasticsearch. It changes the way the field is stored. You can use this value to define some special rules for a field, for example if it needs a specific analyzer. This is a sensitive attribute, if you don't know what to do with it, leave it empty and Elasticsearch will guess what the best mapping is for that field.
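
To make this more concrete, here is a conceptual sketch of a field definition, written as a Python dictionary. This is not actual Socorro code; the key names mirror the attributes described above, and the namespace and mapping values are assumptions for the sake of the example:

# Conceptual sketch of a Super Search field definition (illustrative only).
dom_ipc_enabled_field = {
    'name': 'dom_ipc_enabled',            # unique name, shown in Super Search
    'namespace': 'processed_crash',       # assumed namespace for the example
    'in_database_name': 'DOMIPCEnabled',  # name of the field in the database
    'is_exposed': True,                   # usable as a Super Search filter
    'is_returned': True,                  # usable as a facet, column, and in the API
    'permissions_needed': [],             # empty means no special permission required
    'storage_mapping': {'type': 'boolean'},  # example Elasticsearch mapping
}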

It is, as always, a rule of thumb to apply changes to the dev/staging environments before doing so in production. And to my Mozilla colleagues: this is mandatory! Please always apply any change to stage first, verify it works as you want (using Super Search for example), then apply it to production and verify there.

Getting there

To get to the Super Search Fields admin tool, you first need to be logged in as a superuser. Once that's done, you will see a link to the administration in the bottom-right corner of the page.

Fig 1 - Admin link

Clicking that link will get you to the admin home page, where you will find a link to the Super Search Fields page.

Fig 2 - Admin home page

The Super Search Fields page lists all the currently known fields with their attributes.

Fig 3 - Super Search Fields page

Adding a new field

On the Super Search Fields page, click the Create a new field link in the top-right corner. That leads you to a form.

Fig 4 - New field button

Fill all the inputs with the values you need. Note that Name is a unique identifier for this field, but also the name that will be displayed in Super Search. It doesn't have to be the same as Name in database. The current convention is to use the database name but in lower case and with underscores. So for example if your field is named DOMIPCEnabled in the database, we would make the Name something like dom_ipc_enabled.

Use the documentation about our attributes to understand how to fill that form.

Fig 5 - Example data for the new field form

Clicking the Create button might take some time, especially if you filled the Storage mapping attribute. If you did, in the back-end the application will perform a few checks to verify that this change does not break Elasticsearch indexing. If you get redirected to the Super Search Fields page, that means the operation was successful. Otherwise, an error will be displayed and you will need to press the Back button of your browser and fix the form data.

Note that cache is refreshed whenever you make a change to the list, so you can verify your changes right away by looking at the list.

Editing a field

Find the field you want to edit in the Super Search Fields list, and click the edit icon to the right of that field's row. That will lead you to a form much like the New field one, but prefilled with the current attributes' values of that field. Make the appropriate changes you need, and press the Update button. What applies to the New field form applies here as well (mapping checks, cache refreshing, etc.).

Fig 6 - The edit icon

Deleting a field

Find the field you want to delete in the Super Search Fields list, and click the delete icon to the right of that field's row. You will be prompted to confirm your intention. If you are sure about what you're doing, confirm and you will be done.

Fig 7 - The delete icon

The missing fields tool

We have a tool that looks at all the fields known by Elasticsearch (meaning that Elasticsearch has received at least one document containing that field) and all the fields known in the Super Search Fields, and shows a diff of those. It is a good way to see if you did not forget some key fields that you could use in the app.

To access that list, click the See the list of missing fields link just above the Super Search Fields list.

Fig 8 - The missing fields link

The list of missing fields provides a direct link to create the field for each row. It will take you to the New field form with some prefilled values.

Fig 9 - Missing fields page

Conclusion

I think I have covered it all. If not, let me know and I'll adjust this guide. Same goes if you think some things are unclear or poorly explained.

If you find bugs in this Super Search Fields tool, please use Bugzilla to report them. And remember, Socorro is free / "libre" software, so you can also go ahead and fix the bugs yourself! :-)

Doug Belshaw: A Brief History of Web Literacy and its Future Potential [DMLcentral]

Further to an earlier post on the history of web literacy, I’ve just had an article published at DMLcentral. Entitled A Brief History of Web Literacy and its Future Potential, it weighs in at over 3,000 words – so you might want to sit down with a cup of coffee before starting to read it!

A Brief History of Web Literacy and its Future Potential

After doing some additional research on top of what I did for my thesis, I’ve identified five ‘eras’ of web literacy:

  • 1993-1997: The Information Superhighway
  • 1999-2002: The Wild West
  • 2003-2007: The Web 2.0 era
  • 2008-2012: The Era of the App
  • 2013+: The Post-Snowden era

As I say in the article:

It’s worth noting that what follows is partial, incomplete, focused on the developed, western world, and only a first attempt. I’d be very grateful for comment, pushback, and pointers to other work in this area.

Click here to read the article in full at DMLcentral


Questions? Comments? Please add them to the original article at the link above. Alternatively, email me: doug@mozillafoundation.org

Liz HenryThoughts on working at Mozilla and the Firefox release process

Some thoughts on my life at Mozilla as I head into our company-wide work week here in Portland! My first year at Mozilla I spent managing the huge volume of bugs, updating docs on how to triage incoming bugs and helping out with Bugzilla itself. For my second year I’ve been more closely tied into the Firefox release process, and I switched from being on the A-Team (Automation and Tools) to desktop Firefox QA, following Firefox 31 and then Firefox 34 from their beginnings through to release.

picard saying if it's not in bugzilla it doesn't exist

I also spent countless hours fooling around with Socorro, filing bugs on the highest volume crash signatures for Firefox, and then updating the bugs and verifying fixes, especially for startup crashes or crashes associated with “my” releases.

Over this process I’ve really enjoyed working with everyone I’ve met online and in person. The constant change of Mozilla environments and the somewhat anarchic processes are completely fascinating. Though sometimes unnerving. I have spent these two years becoming more of a generalist. I have to talk with end users, developers on every Mozilla engineering team, project or product managers if they exist, other QA teams, my own Firefox/Desktop QA team, the people maintaining all the tools that all of us use, release management, and release engineering. Much of the actual QA has been done by our Romanian team so I have coordinated a lot with them and hope to meet them all some day. When I need to dig into a bug or feature and figure out how it works, it means poking around in documentation or in the code or talking with people, then documenting whatever it is.

This is a job (and a workplace) suited for people who can cope with rapid change and shifting ground. Without a very specific focus, expertise is difficult. People complain a lot about this. But I kind of like it! And I’m constantly impressed by how well our processes work and what we produce. There are specific people who I think of as total rock stars of knowing things. Long time experts like bz or dbaron. Or dmajor who does amazing crash investigations. I am in their secret fan club. If there were bugzilla “likes” I would be liking up a storm on there! It’s really amazing how well the various engineering teams work together. As we scramble around to improve things and as we nitpick, I want to keep that firmly in mind. Especially now, when we are all a bit burned out from dealing with the issues from the Firefox 33 release and many point releases, then the 10th anniversary release, then last-minute scrambling to incorporate surprise/secret changes to Firefox 34 because of our new deal with Yahoo.

I was in a position to see some of the hard work of the execs, product and design people, and engineers, as well as to go through some rounds of iterating very quickly with them these last 2 weeks (and to see gavin and lmandel keep track of that rapid pace). Then I saw just part of what the release team needed to do, knowing that from their perspective they could also see what the IT and infrastructure teams had to do in response. So many ripples of teamwork and people thinking things through, discussing, building an ever-changing consensus reality.

I find that for every moment I feel low for not knowing something, or making a mistake (in public no less) there are more moments when I know something no one else knows in a particular context or am able to add something productive because I’m bringing a generalist perspective that includes my years of background as a developer.

As an author and editor I am reminded of what it takes to edit an anthology. My usual role is as a general editor with a vision: putting out a call, inviting people to contribute, working back and forth with authors, tracking all the things necessary (quite a lot to track, more than you’d imagine!) and shifting back and forth from the details of different versions of a story or author bios, to how the different pieces of the book fit together. And in that equation you also, if you’re lucky, have a brilliant and meticulous copy editor. At Aqueduct Press I worked twice with Kath Wilham. I would have gone 10 rounds of nitpick and Kath would still find things wrong. My personal feeling for my past year’s work is that my job as test plan lead on a release has been 75% “editor” and 25% copy editor.

If I had a choice I would always go with the editor and “glue” work rather than the final gatekeeper work. That last moment of signing off on any of the stages of a release freaks me out. I’m not enough of a nitpicker and in fact, have zero background in QA — instead, 20 years as a lone wolf (or nearly so) developer, a noisy, somewhat half-assed one who has never *had* QA to work with much less *done* QA. It’s been extremely interesting, and it has also been part of my goal for working in big teams.

The other thing I note from these stressful last 6 weeks is that, consistent with other situations for me, in a temporary intense “crisis” mode my skills shine out. Like with the shifting-every-30-minutes disaster relief landscape, where I had great tactical and logistical skills and ended up as a good leader. My problem is that I can’t sustain that level of awareness, productivity, or activeness, both physically and mentally, for all that long. I go till I drop, crash & burn. The trick is knowing that’s going to happen, communicating it beforehand, and having other people to back you up. The real trick which I hope to improve at is knowing my limits and not crashing and burning at all. On the other hand, I like knowing that in a hard situation I have this ability to tap into. It just isn’t something I can expect to do all the time, and isn’t sustainable.

One thing I miss about my “old” life is building actual tools. These last 2 years have been times of building human and institutional infrastructure for me much more than making something more obvious and concrete. I would like to write another book and to build a useful open source tool of some sort. Either for Mozilla, for an anti-harassment tool suite, or for Ingress…. :) Alternatively I have a long-term idea about an open source hardware project for mobility scooters and powerchairs. That may never happen, so at the least, I’m resolving to write up my outline of what should happen, in case someone else has the energy and time to do it.

In 2013 and 2014 I mentored three interns for the GNOME-OPW project at Mozilla. Thanks so much to Tiziana Selitto, Maja Frydrychowicz, and Francesca Ciceri for being awesome to work with! As well as the entire GNOME-OPW team at Mozilla and beyond. I spoke about the OPW projects this summer at Open Source Bridge and look forward to more work as a mentor and guide in the future.

Meanwhile! I started a nonprofit in my “spare time”! It’s Double Union, a feminist, women-only hacker and maker space in San Francisco and it has around 150 members. I’m so proud of everything that DU has become. And I continue my work on the advisory boards of two other feminist organizations, Ada Initiative and GimpGirl as well as work in the backchannels of Geekfeminism.org. I wrote several articles for Model View Culture this year and advised WisCon on security issues and threat modeling. I read hundreds of books, mostly science fiction, fantasy, and history. I followed the awesome work my partner Danny does with EFF, as well. These things are not just an important part of my life, they also make my work at Mozilla more valuable because I am bringing perspectives from these communities to the table in all my work.

The other day I thought of another analogy for my last year’s work at Mozilla that made me laugh pretty hard. I feel personally like the messy “glue” Perl scripts it used to be my job to write to connect tools and data. Part of that coding landscape was because we didn’t have very good practices or design patterns but part of it I see as inevitable in our field. We need human judgement and routing and communication to make complicated systems work, as well as good processes.

I think for 2015 I will be working more closely with the e10s team as well as keeping on with crash analysis and keeping an eye on the release train.

Mad respect and appreciation to everyone at Mozilla!!

Firefox launch party liz larissa in fox ears


Yunier José Sosa VázquezFirefox Hello, WebIDE and more in the new version of Firefox

Firefox Update

We are already in the last month of the year and, before we say goodbye to this great 2014, Mozilla gives us a new version of our favorite browser packed with new features for everyone, and especially for those who enjoy making video calls and developing for the Web. Without dragging the article out any further, here is a first-hand look at what's new in this release.

Firefox Hello is a real-time communication client that lets you talk to family and friends who may not have the same video chat service, software or hardware as you. Video and voice calls are free, and you won't need to download anything because everything comes built into the browser. By clicking the firefox_hello_icon “chat bubble” button located in Firefox's toolbar, you will be ready to connect with anyone who has WebRTC enabled in their browser (Firefox, Chrome or Opera).

FFHELLO-252x354

Hello also lets you use the service without being forced to create an account: by sharing the call link, you can start the conversation. Contact integration is included, letting you manage your contacts and even import them from your Google account. If your contacts have a Firefox Account and are online, you can “call” them directly from Firefox.

From now on, changing themes/personas is easier because you can do it directly in Firefox's Customize mode, which you reach from the Menu menu -> menuPanel-customize Customize, or by typing about:customizing in the address bar.

Cambiar_tema_persona

WebIDE is a new in-browser application development tool that you will find in this version. WebIDE lets you create a new Firefox OS app (which is just a Web App) from a template, or open the code of a previously created app. From there you can edit the app's files, run it in the simulator and debug it with the developer tools. To open it, select the WebIDE option from Firefox's “Web Developer” menu. (read more in the documentation)

WebIDE

To increase your security, all the searches you perform on Wikipedia from Firefox now use the secure protocol (HTTPS), which keeps what you search for from being “seen” by third parties (English only).

Meanwhile, the United States English version debuts an improved search bar that shows suggestions and lets you switch engines quickly. If this change is well received, it will appear in the remaining Firefox localizations. For Russia, Belarus and Kazakhstan the default search engine was changed to Yandex, and to Yahoo for North America, in line with the agreements Mozilla signed a few days ago with Yandex and Yahoo.

DEMO-one-off-search

If you used to see the “Firefox is already running” message, you won't see it anymore, because a recovery mode from a locked Firefox process has been included that is transparent to you. HTTP/2 (draft 14) and ALPN have also been implemented.

For Android

  • The browser theme has been updated.
  • Support for Chromecast tab mirroring.
  • The first-run experience has been redesigned.
  • Enabled support for public key pinning.
  • You can switch to Wi-Fi when an error occurs while loading a web page.
  • Added support for the HTTP Prefer:Safe header.

Other new features

  • WebCrypto: support for RSA-OAEP, PBKDF2 and AES-KW.
  • WebCrypto: implemented the wrapKey and unwrapKey functions.
  • WebCrypto: keys can be imported/exported in JWK format.
  • Added support for the ECMAScript 6 Symbol data type.
  • WebCrypto: support for ECDH.
  • Implemented template strings in JavaScript.
  • Enabled the Device Storage API for privileged apps.
  • Performance.now() has been implemented for workers.
  • Ability to highlight all nodes matching a selector in the Style Editor and the Inspector's Rules panel.
  • Improved the Profiler's user interface.
  • The DOM Matches() API has been implemented.
  • The console.table function has been added to the web console.

If you want to know more, you can read the release notes (in English).

You can get this version from our Downloads area, in Spanish and English, for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the network.negotiate-auth.allow-insecure-ntlm-v1 preference to true in about:config.

Note: the Mac and Android builds are still downloading; we will publish them as soon as they are ready.

Frédéric HarperFirefox OS – HTML for the mobile web at All Things Open

Copyright: Jonathan LeBlanc

Every time I speak at a conference, I feel blessed to be able to do so. For me, it’s a great opportunity to share my passion as well as my expertise about technology. For All Things Open, I was happy to be part of the amazing list of speakers who love Open Source as much as I do. As with many of the talks I have given in the last year and a half, I was talking about Firefox OS. I think that there is still a lot of awareness to create about this new operating system: so many people don’t know about the power behind Firefox OS and HTML5. It’s even truer in North America, where you cannot go to your local store and buy a device like you can in some places in Europe and LATAM.

Since I was in the main room, my talk was recorded. Also, because of that, and because five tracks were running at the same time, my room looks a bit empty (I had about 100 people). In any case, I got interesting feedback about Firefox OS and my talk.

As is my habit, I also made my own recording, so you have access to another version. The sound is not as good as the professional recording, but you get a better view of the screen (I should mix the two to make the ultimate recording).

Looking forward to speaking at the 2016 edition of ATO!


--
Firefox OS – HTML for the mobile web at All Things Open is a post on Out of Comfort Zone from Frédéric Harper

Patrick McManusFirefox gecko API for HTTP/2 Push

HTTP/2 provides a mechanism for a server to push both requests and responses to connected clients. Up to this point we've used that as a browser cache seeding mechanism. That's pretty neat: it gives you the performance benefits of inlining with better cache granularity and, more importantly, improved priority handling, and it does it all transparently.

However, as part of gecko 36 we added a new gecko (i.e. internal firefox and add-on) API called nsIHttpPushListener that allows direct consumption of pushes without waiting for a cache hit. This opens up programming models other than browsing.

A single HTTP/2 stream, likely formed as a long-lasting transaction from an XHR, can receive multiple pushed events correlated to it without having to form individual hanging polls for each event. Each event is both an HTTP request and an HTTP response and is as arbitrarily expressive as those things can be.

It seems likely any implementation of a new Web based push notification protocol would be built around HTTP/2 pushes and this interface would provide the basis for subscribing and consuming those events.

nsIHttpPushListener is only implemented for HTTP/2. Spdy has a compatible feature set, but we've begun transitioning to the open standard and will likely not evolve the feature set of spdy any further at this point.

There is no webidl dom access to the feature set yet, that is something that should be standardized across browsers before being made available.

Gervase MarkhamSearch Bugzilla with Yahoo!

The Bugzilla team is aware that there are currently 5 different methods of searching Bugzilla (as explained in yesterday’s presentation) – Instant Search, Simple Search, Advanced Search, Google Search and QuickSearch. It has been argued that this is too many, and that we should simplify the options available – perhaps building a search which is all three of Instant, Simple and Quick, instead of just one of them. Some Bugzilla developers have sympathy with that view.

I, however, having caught the mood of the times, feel that Mozilla is all about choice, and there is still not enough choice in Bugzilla search. Therefore, I have decided to add a sixth option for those who want it. As of today, December 1st, by installing this GreaseMonkey script, you can now search Bugzilla with Yahoo! Search. (To do this, obviously, you will need a copy of GreaseMonkey.) It looks like this:

In the future, I may create a Bugzilla extension which allows users to fill the fourth tab on the search page with the search engine of their choice, perhaps leveraging the OpenSearch standard. Then, you will be able to search Bugzilla using the search engine which provides the best experience in your locale.

Viva choice!

John O'Duinn“APE – How to Publish a Book” by Guy Kawasaki

A few years ago, I first put my toes into the book publishing world by co-writing a portion of AOSAv2 about Mozilla’s RelEng infrastructure. Having never written part of a published book before, I had no idea what I was really getting into. It was a lot of work, in the midst of an already busy day job, and yet I found it strangely rewarding. Not financially rewarding – all proceeds from the book went to Amnesty – but rewarding in terms of getting us all to organize our thoughts and write them down in a clear, easy-to-read way, explaining the million-and-one details that “we just knew instinctively”, and hopefully helping spread the word to other software companies about what we did when changing Mozilla’s release cadence.

While working on AOSAv2, clearly explaining the technology was hard work, as expected, but I was surprised by how much work went into “simple” mechanics – merging back reviewer feedback, tracking revisions, dealing with formatting of tables and diagrams, publishing in different formats… and remember, this was a situation where book publisher contracts, revenue and other “messy stuff” was already taken care of by others. I “just” had to write. I was super happy to have the great guidance and support of Greg Wilson and Amy Brown who had been-there-done-that, helped work through all those details, and kept us all on track.

Ever since then, I’ve been considering more writing, but daunted by all the various details above and beyond “just writing”. These blog posts help scratch that itch, in between my own real-life-work-deadlines, but the idea of writing a full book, by myself, still lingered. A while ago, I grabbed “APE: Author, Publisher, Entrepreneur-How to Publish a Book” by Guy Kawasaki. It looked like a good HowTo manual, I’ve enjoyed some of his other books, and he’s always a great presenter, so I was looking forward to some quiet time to read this book cover-to-cover.

This was well worth the time, and I re-read it a few times!

For me, some of the highlights were:

  • “why are you writing a book?” I really like how Guy turned this question around to “why would someone else want to read your book”. Excellent mind-flip. I’ve met a few people who want to write a book, and even a few published authors, and I’ve talked with them about my own ideas about writing. But no-one, not one, ever reversed the question like this. It was instantly self-evident to me – it takes time to read a book, and we’re all busy. So, even if someone gave me a book for free, why would I want to skip work and/or social plans to read a book by someone I don’t know? Making it clear, immediately, why someone would find it worthwhile reading your book is a crucial step that I think many people skip past. As the author, keeping this in mind at all times while writing will help keep you focused on the straight-and-narrow path to writing a book that people would actually want to read.
  • Money: Most publishers are super-secret about their contracts/terms/conditions, which can make a first-time author feel like they’re going to be taken (The only exception I know of is Apress, who publish all their terms on their website, with a “no haggling” clause). To help educate potential authors, I respect how much detail Guy & Shawn gave in simple, easy-to-follow words.
  • “Tell the world you’re writing a book – not that you’re thinking of writing a book.” Again, an excellent mind-flip to help keep you motivated and writing, every single day, whether you want to or not. Also, they provided many links to writer’s clubs (writer support groups!?) that would help you keep motivated.
  • print-on-demand vs print-big-batch: This reminded me of how software release cycles are changing the software industry from old monolith release cadence to rapid-release cadence. “Old way”: a big-bang-release every unpredictable 18months, with a costly big print run, and lots of ways to handle financial risk of under/over selling; any corrections are postponed until the next big-bang-release if it looks like there is enough interest. “New way”: build infrastructure to enable print-on-demand. Do smaller, more frequent, releases, each with small print runs, (almost) no risk of under/over selling, corrections handled frequently and easily. Yes, at first glance, each printed book might seem more expensive this way, but when you factor in the lack-of-under/over selling, removed financial risk, and benefits of frequent updates to the almost-free electronic readers, it actually feels cheaper, more efficient and more appealing to me.
  • In addition to printed books, there’s a good description of pros/cons of the different popular electronic formats (PDF, MOBI, EPUB, DAISY, APK…) as well as related DRM.
  • The differences between ebook publishers (Amazon, Apple, Barnes & Noble, Google, Kobo, …), Author Publisher Services (Lulu, Blurb, Author Solutions, …) and Print-on-Demand (Walkerville Publishing, Lightning Source, …) was detailed and very helpful. Complex chapter, with lots of data, and ending with the reassuring “Don’t obsess about making the wrong choice, however, because most distribution decisions are changeable.”!
  • translations, audiobooks: normally, these are handled as edge cases. Guy & Shawn walk through some of the options (Audible/Amazon, Books-on-Tape/RandomHouse), as well as financial & legal realities.
  • Some fun examples of rejection responses by agents/publishers. My personal favorite was a rejection sent to George Orwell about Animal Farm “It is impossible to sell animal stories in the USA”.

All in all, I found the writing style personal, helpful, direct and super honest. Even the way they ended the book… “Thank you. Now go write a book! —Guy and Shawn”

Thank you both.

Benjamin SmedbergAn Invitation

I’d like to invite my blog readers and Mozilla coworkers to Jesus Christ.

For most Christians, today marks the beginning of Advent, the season of preparation before Christmas. Not only is this a time for personal preparation and prayer while remembering the first coming of Christ as a child, but also a time to prepare the entire world for Christ’s second coming. Christians invite their friends, coworkers, and neighbors to experience Christ’s love and saving power.

I began my journey to Christ through music and choirs. Through these I discovered beauty in the teachings of Christ. There is a unique beauty that comes from combining faith and reason: belief in Christ does not require superstition nor ignorance of history or science. Rather, belief in Christ’s teachings brought me to a wholeness of understanding the truth in all its forms, and our own place within it.

Although Jesus is known to Christians as priest, prophet, and king, I have a special and personal devotion to Jesus as king of heaven and earth. The feast of Christ the King at the end of the church year is my personal favorite, and it is a particular focus when I perform and compose music for the Church. I discovered this passion during college; every time I tried to plan my own life, I ended up in confusion or failure, while every time I handed my life over to Christ, I ended up being successful. My friends even got me a rubber stamp which said “How to make God laugh: tell him your plans!” This understanding of Jesus as ruler of my life has led to a profound trust in divine providence and personal guidance in my life. It even led to my becoming involved with Mozilla and eventually becoming a Mozilla employee: I was a church organist, and switching careers to become a computer programmer was a leap of faith, given my lack of education.

Making a religious invitation to coworkers and friends at Mozilla is difficult. We spend our time and build our deepest relationships in a setting of email, video, and online chat, where off-topic discussions are typically out of place. I want to share my experience of Christ with those who may be interested, but I don’t want to offend or upset those who aren’t.

This year, however, presents me with a unique opportunity. Most Mozilla employees will be together for a shared planning week. If you will be there, please feel free to find me during our down time and ask me about my experience of Christ. If you aren’t at the work week, but you still want to talk, I will try to make that work as well! Email me.

1. On Jordan’s bank, the Baptist’s cry
Announces that the Lord is nigh;
Awake, and hearken, for he brings
Glad tidings of the King of kings!

2. Then cleansed be every breast from sin;
Make straight the way for God within;
Prepare we in our hearts a home
Where such a mighty Guest may come.

3. For Thou art our Salvation, Lord,
Our Refuge, and our great Reward.
Without Thy grace we waste away,
Like flowers that wither and decay.

4. To heal the sick stretch out Thine hand,
And bid the fallen sinner stand;
Shine forth, and let Thy light restore
Earth’s own true loveliness once more.

5. Stretch forth thine hand, to heal our sore,
And make us rise to fall no more;
Once more upon thy people shine,
And fill the world with love divine.

6. All praise, eternal Son, to Thee
Whose advent sets Thy people free,
Whom, with the Father, we adore,
And Holy Ghost, forevermore.

—Charles Coffin, Jordanis oras prævia (1736), Translated from Latin to English by John Chandler, 1837

Jen Fong-AdwentMaking a soundtrack for a non-existent sci-fi film

I used to make a lot of campy IDM and ambient/experimental back in 1999-2007ish or something

Christian HeilmannIt is Blue Beanie Day – let’s reflect #bbd14

Today we once again celebrate Blue Beanie Day. People who build things online don their blue beanies and show their support for standards-based web development. All this goes back to Jeffrey Zeldman’s book that outlined the idea and caused a massive change in the field of web design.

me, wearing my HTML beanie

Let’s celebrate – once again

It feels good to be part of this; it is a tradition, and it reminds us of how far we’ve come as a community and as a professional environment. To me, though, it is starting to feel a bit stale. I get the feeling we are losing touch with what is happening these days and celebrating the same old successes over and over again.

This could be normal disillusionment of having worked in the same field for a long time. It also could be having heard the same messages over and over. I start to wonder if the message of “use web standards” is still having an impact in today’s world.

The web is a commodity

I am not saying web standards are unnecessary – far from it. I am saying that we lose a lot of new developers to other causes and that web development as a craft is becoming less important than it used to be.

The web is a thing that people use. It is there, it does things. Much like opening a tap gives you water in most places we live in. We don’t think about how the tap works, we just expect it to do so. And we don’t want to listen to anyone who tells us that we need to use a tap in a certain way or we’re “doing it wrong”. We just call someone in when the water doesn’t run.

Standards mattered most when browsers worked against them

When web standards based development became a thing it was an absolute necessity. Browser support was all over the shop and we had to find something we could rely on. That is a standard. You can dismantle and assemble things because there is a standard for screws and screwdrivers. You can also use a knife or a key for that and thus damage the screw and the knife. But who cares as long as the job’s done, right? You do – as soon as you need to disassemble the same thing again.

Far beyond view source

Nowadays our world has changed a lot. Browser support is excellent. Browsers are pretty amazing at displaying complex HTML, CSS and JavaScript. On top of that, browsers are development tools giving us insights into what is happening. This goes beyond the view-source of old which made the web what it is. You can now inspect JavaScript-generated code. You can see browser-internal structures. You see what loaded when and how the browser performs. You can inspect canvas, WebGL and WebAudio. You can inspect browsers on connected devices and simulate devices and various connectivity scenarios.

All this and the fact that the HTML5 parser is forgiving and fixes minor markup glitches makes our chant for web standards support seem redundant. We’ve won. The enemies of old – Flash and other non-standard technologies seem to be forgotten. What’s there to celebrate?

Our standards, right or wrong?

Well, the struggle for a standards based web is far from over and at times we need to do things we don’t like doing. An open source browser like Firefox having to support DRM in video playback is not good. But it is better than punishing its users by preventing them from using massively successful services like Netflix. Or is it? Should our goal to only support open and standardised technology be the final decision? Or is it still up to us to show that open and standardised means the solutions are better in the long run and let that one slip for now? I’m not sure, but I know that it is easier to influence something when you don’t condemn it.

A new, self-made struggle

All in all there is a new target for those of us who count themselves in the blue beanie camp: complexity and “de-facto standards”.

The web grew to what it is now as it was simple to create for it. Take a text editor, write some code, open it in a browser and you’re done. These days professional web development looks much different. We rely on package managers. We rely on resource managers. We use task runners and pre-processing to create HTML, CSS and JavaScript solutions. All these tools are useful and can make a massive difference in a big and complex site. They should not be a necessity and are often overkill for the final product though. Web standards based development means one thing: you know what you’re doing and what your code should do in a supported browser. Adding these layers adds a layer of dark magic to that. Instead of teaching newcomers how to create, we teach them to rely on things they don’t understand. This is a perfectly OK way to deliver products, but it sets a strange tone for those learning our craft. We don’t empower builders, we empower users of solutions to build bigger solutions. And with that, we create a lot of extra code that goes on the web.

A “de-facto standard” is nonsense. The argument that something becomes good and sensible because a lot of people use it assumes a lot. Do these people use it because they need it? Or because they like it? Or because it is fashionable to use? Or because it yields quick results? Results that in a few months time are “considered dangerous” but stick around for eternity as the product has been shipped.

Framing the new world of web development

We who don the blue hats live in a huge echo chamber. It is time to stop repeating the same messages and concentrate on educating again. The web is obese, solutions become formulaic (parallax scrollers, huge hero headers…). There is a whole new range of frameworks to replace HTML, CSS and JavaScript out there that people use. Our job as the fans of standards is to influence those. We should make sure we don’t go towards a web that is dependent on the decisions of a few companies. Promises of evergreen support for those frameworks ring hollow. It happened with YUI - a very important player in making web standards based work scale to huge company size. And it can happen to anything we now promote as “the easier way to apply standards”.

David Rajchenbach TellerWould you like to learn how to develop Free Software?

This year, the Mozilla Community is offering a series of lectures and tutorials in Paris on Free Software development.

On the program:

  • how to join an existing project;
  • how to communicate within a distributed team;
  • how to fund a free software project;
  • code quality;
  • code!
  • (and much more).

For more details, and to sign up, everything is here.

Note: classes start on December 8!


Doug BelshawToward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online

LRA slides

I’m presenting at the Literacy Research Association conference next Friday. I got some useful feedback after my previous post so this is pretty much the version I’m going to present. The slides are above (modern web browser with fast JavaScript performance required!)

Introduction

Hi everyone, and thanks to Ian for the introduction. I’m really glad to be here - it’s my first time in Florida and, although I’ll only be here for about 46 hours, I plan to make full use of the amount of sunshine. I come from the frozen wastelands of northern England where most of us have skin like ‘Gollum’ from Lord of the Rings. Portland, Oregon - where I’ve just come from a Mozilla work week - was actually colder than where I live!

But, seriously, I really appreciate the opportunity to talk to you about something that’s really important to the Mozilla community - web literacy. It’s a topic I don’t think has been given enough thought and attention, and I’d like to use the brief time I’ve got here to convince you to help us rectify that. I’ll show you this quickly - the competency grid from v1.1 of the Web Literacy Map but I want to give some background before diving too much into that.

I’m a big fan of Howard Rheingold’s work, and he talks about 'literacies of attention’. It seems appropriate, therefore, to tell you what I’m going to cover and to front-load this presentation with the conclusions I’m going to make. That way you can process what I’m trying to get across while the caffeine’s still coursing through your veins.

I was always taught to say what you’re going to say, say it, and then say what you’ve said. So my conclusions, the things you should pay attention to, are the following:

  1. Web literacy is a useful focus / research area
  2. We should work together instead of building endless competing frameworks
  3. There’s a need to balance rigour and grokkability

Given the looks on some people’s faces, I should probably just say quickly that 'grok’ is a real word! The Oxford English Dictionary defines it as, 'to understand intuitively or by empathy; to establish rapport with.’ The Urban Dictionary, meanwhile, defines it as, 'literally meaning 'to drink’ but taken to mean 'understanding.’ Often used by programmers and other assorted geeks.’ You should probably grok Mozilla first, as it seems a bit odd to have some corporate shill from a browser company at the Literacy Research Association conference, no?

Well, that’s the thing. Every part of that sentence is incorrect. First, Mozilla isn’t a company, it’s a global non-profit. Second, Mozilla is not just about the half a billion people who use Firefox, but about a mission to promote openness, innovation & opportunity on the Web. And third, no-one’s selling anything here. Instead, it’s an invitation to do the work you were going to do anyway, but share and build with others for the benefit of mankind. Ian is a Mozillian who works in academia, as was I before I became a paid contributor. We have Mozillians in all walks of life, from engineers to teachers, and in every country of the world. It’s a global community that also makes products instantiating our mission and values.

Also, we work in the open and make everything we do available under open licenses. You’re free to rip and remix.

OK, so I think it was important to say that up front.

Web literacy is a useful focus / research area

Let’s start with web literacy as a useful focus / research area. I wrote my thesis on digital literacies and if there’s one thing that I learned it’s that there’s as many definitions of 'digital literacy’ as there are researchers in the field! Why on earth, then, would we need another term to endlessly redefine and argue about? Well, I’d argue that the good thing about the web is that it’s easier to agree what we’re actually talking about. Yes, there may be some people who use the term 'web’ when they actually mean 'internet’ but, by and large we all know what we’re talking about.

As well as being something most people know about, it’s also ubiquitous. If you have access to the internet, then you almost always also have access to the web. That’s not true of other digital spaces where walled gardens are the norm. I’m sure there are very specific skills, competencies and habits of mind you need to use locked-down, proprietary products. And that’s great. But I think a better use of our time is thinking about the skills, competencies and habits of mind required to use a public good. To use an imperfect analogy, we don’t teach people to drive specific cars but give them a license to drive pretty much any car.

Web literacy is also an important research area because it’s political. Take the live issue of 'net neutrality’. To recap, this is “the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication” (Wikipedia). While this may seem somewhat esoteric and distant, it’s a core part of web literacy. Just as Paolo Freire and others have seen literacy as a hugely emancipatory and liberating force for social change, so too web literacy is a force for good.

One response I get when I talk about web literacy is, “isn’t that covered by information literacy?” or “I’m sure what you describe is just digital media literacy.” And maybe it is. Let’s have a discussion. But before we do, I will note how fond researchers are of what I call ‘umbrella terms’. So one researcher conceives of digital literacy as including media literacy and information literacy. Another thinks media literacy includes information literacy and digital literacy. And a third believes information literacy to include digital literacy and media literacy. And so on.

Perhaps the clearest thinking in recent times around new literacies has been provided by Colin Lankshear and Michele Knobel. They write in a clear, lucid way that makes sense to researchers and practitioners alike. They’re a good example of what I want to talk about later in terms of balancing rigour and grokkability. I’m particularly fond of this quotation from the introductory chapter to their New Literacies Sampler. Apologies for the lengthy quotation, but I think it’s important:

Briefly, then, we would argue that the more a literacy practice can be seen to reflect the characteristics of the insider mindset and, in particular, those qualities addressed here currently being associated with the concept of Web 2.0, the more it is entitled to be regarded as a new literacy. That is to say, the more a literacy practice privileges participation over publishing, distributed expertise over centralized expertise, collective intelligence over individual possessive intelligence, collaboration over individuated authorship, dispersion over scarcity, sharing over ownership, experimentation over “normalization,” innovation and evolution over stability and fixity, creative-innovative rule breaking over generic purity and policing, relationship over information broadcast, and so on, the more we should regard it as a “new” literacy. New technologies enable and enhance these practices, often in ways that are stunning in their sophistication and breathtaking in their scale. Paradigm cases of new literacies are constituted by “new technical stuff ” as well as “new ethos stuff.”

Now if what they describe in this quotation doesn’t describe the web and web literacy, then I don’t know what does!

I’ve been working recently on a brief history of web literacy. That was published earlier this week over at DMLcentral, so do go and have a read. I’d appreciate your insights, comments and pushback. In the article I loosely identified five 'eras’ of web literacy:

  • 1993-1997: The Information Superhighway
  • 1999-2002: The Wild West
  • 2003-2007: The Web 2.0 era
  • 2008-2012: The Era of the App
  • 2013+: The Post-Snowden era

I haven’t got time to dive into this here, but it’s worth noting a couple of things. One, not everyone gives the name 'web literacy’ to the skills required to use the web. And, two, this isn’t a linear progression. For example, I’d argue that we’re entering a time when popular opinion realises that these skills need to be taught; they’re not innate nor just a result of immersion and use.

We should work together instead of building endless competing frameworks

So far, I haven’t defined web literacy. I’ve hinted at it by talking about skills, competencies and habits of mind but I haven’t introduced one definition to rule them all. Why is that? Well, as I mentioned before, there’s a lot of definitions out there. And definitions are powerful things. They can constrain what is in and out of scope. They can give some people a voice while silencing others. They can privilege certain ways of being above others. Given that digital skills are currency in the jobs market, definitions can have economic effects too.

Here’s how the Mozilla community currently defines web literacy:

the skills and competencies needed for reading, writing and participating on the web.

We hope that this definition is broad enough to be inclusive but specific enough to be able to do the work required of it. But if you want to change it, you’re welcome to - come along to one of the community calls, file a 'bug’ in Bugzilla, start a conversation thread in the Teach The Web discussion forum. We work open.

Let me explain what that means and what it looks like.

If I make something (say, a framework) and write a paper about it, then you could adopt it wholesale. You could. But what’s more likely is that you’d want to put your own stamp on it. You’d want to 'remix’ it to include things that might have been missed or neglected. In software development terms these are known as 'unmerged forks’. In other words, you’ve taken something, changed it, and then started promoting that new thing. Meanwhile, the original is still kicking around somewhere. Multiply this many times and you’ve got a recipe for confusion and chaos.

Instead, what if we merged those changes? What if we discussed them in a democratic and open way? And what if there was a global non-profit as a steward for the process? What I’ve described applies to the World Wide Web Consortium (usually abbreviated to W3C) which is the main international standards organization for the web. But it also describes something that we’ve defined and continue to evolve within the Mozilla community: the Web Literacy Map.

The Web Literacy Map v1.1 is currently localised in full or in part in 22 languages. This is done by an army of volunteers, some of whom have been part of the discussions leading to the map, some not. It forms the core of the Boys and Girls Clubs of America’s new digital strategy. The University of British Columbia use it for student onboarding. And there are many organisations using it as a 'sense check’ for their curricula, schemes of work, and rubrics. (Quite how many, we’re not entirely sure as it’s an openly-licensed project.)

Interestingly, the shift from calling it a Web Literacy 'Standard’ in 2013 to calling it a Web Literacy 'Map’ in 2014 seems to have slightly decreased its popularity in formal education, but increased its popularity elsewhere. The decision to do this came after we had feedback that, particularly in the US, 'standard’ was a problematic term that came with baggage. These cultural differences are interesting - for example 'standard’ doesn’t particularly have positive or negative connotations in most of Europe, as far as I can tell. Another example would be from Alvar Maciel, an Argentinian teacher and technology integrator. He informed us on one community call that while the translation of 'competence’ makes literal sense in Argentinian Spanish, because of the association and baggage it comes with, educators would avoid it.

At the same time, the Web Literacy Map exists for a particular purpose. That purpose is to underpin Mozilla’s Webmaker program. Webmaker is an attempt to give people the knowledge, skills and confidence to 'teach the web’. In other words, to train the trainers helping others with web literacy. The focus of the community building the Web Literacy Map has these people in mind and, because Webmaker is a global program, difference, diversity and nuance is welcome.

So far, this all sounds very edifying and unproblematic. Like any project, it’s not without issues. Perhaps the biggest, especially now version 1.1 is out the door, is participation and contribution. There are core contributors - like Ian and Greg and a few others - but a good number of others are episodic volunteers. By and large these are people who know the domain - researchers, teachers, consultants, industry experts. Their occasional contributions are great, but it can be difficult when we have to explain why decisions were taken - sometimes quite a while ago. At the same time, new blood can mix things up and force us to question what went before.

Let’s use the current development of what for the moment we’re calling Web Literacy Map v2.0. Here’s how we’ve proceeded so far. First off, I decided that if we were going to fulfil our promise to update the map as the web evolves, we should probably review it on a yearly basis. Back in August I approached people - mainly in my networks, mainly people who know the space - to ask if they’d like to be interviewed. I can’t think of anyone who said no. The questions I asked to loosely structure the recorded half-hour conversations were:

  1. Are you currently using the Web Literacy Map (v1.1)? In what kind of context?
  2. What kinds of contexts would you like to use an updated (v2.0) version of the Web Literacy Map?
  3. What does the Web Literacy Map do well?
  4. What’s missing from the Web Literacy Map?
  5. Who would you like to see use/adopt the Web Literacy Map?

I also gave them a chance to say things that didn’t seem to fit in elsewhere. Sometimes I asked the questions in a slightly different order. Sometimes I fed in ideas from previous interviewees.

From those interviews I identified around 21 emerging themes for things that people would like to see from a version 2.0 of the Web Literacy Map. I boiled these down to five that would help us define the scope of our work. I formed them into proposals for a web-based community survey. This, following demand from the community, was translated from English into five other languages. The five proposals were:

  • Proposal 1: “I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.”
  • Proposal 2: “I believe the three strands should be renamed 'Reading’, 'Writing’ and 'Participating’.”
  • Proposal 3: “I believe the Web Literacy Map should look more like a 'map’.”
  • Proposal 4: “I believe that concepts such as 'Mobile’, 'Identity’, and 'Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.”
  • Proposal 5: “I believe a 'remix’ button should allow me to remix the Web Literacy Map for my community and context.”

Every question on the survey was optional. Respondents could indicate agreement with the proposal on a five-point scale and add a comment if they wished. We received 177 responses altogether. Some chose to remain anonymous, which is fine. The important thing is that almost every respondent completed all of the survey.

From that I proposed a series of seven community calls. There was an introductory call, we’re towards the end of separate calls discussing each of the proposals, and then we’ll conclude just before Christmas. This will help decide what’s in and out of scope so we can hit the ground running in 2015.

This is a microcosm of how we developed what was then called the Web Literacy Standard from 2012 onwards. Back then, we did some preliminary work and then published a whitepaper. We invited lots of people to a kick-off call and decided how to proceed. Once we decided what was in and out of scope, we dug into some of the complexity. The Mozilla Festival seemed like a good place to launch the first version, so we set ourselves September 2013 as the deadline. This involved some 'half-hour hackfests’ where a few of us focused on getting certain sections finished and ready for review.

It’s important to note that we all have skills in different areas. For example, Carla Casilli - who’s now at the Badge Alliance and who worked closely with me on this - has a real gift for naming things. Ian’s particularly good at practicalities and bringing us back down to earth. If it takes a village to raise a child, it takes a community to create a Web Literacy Map!

There’s a need to balance rigour and grokkability

So my third and final point is that we need to balance rigour and grokkability. Just as I’d happily argue until the cows come home about the necessity for that ‘u’ in ‘rigour’, so I’d be happy to get stuck into philosophical discussions about literacy. Seriously, grab me later today if you want a conversation about Peirce’s theory of signs or Empson on ambiguity. I’m definitely the person at Mozilla who’s most likely to say:

That’s all very well in practice, but how does it work in theory? (Garret FitzGerald)

However, that’s not always a great approach. I’ve learned that perfect is the enemy of good. We need to balance both, because:

Theory without practice is empty; practice without theory is blind. (Immanuel Kant)

If you like your thinkers more revolutionary than conservative, I’m also fond of the quotation on Karl Marx’s tomb:

The philosophers have only interpreted the world, in various ways. The point, however, is to change it.

So how do we do that? We invite everyone in. We care about the outcome more than about individual contributions. After all, given enough eyes, all bugs are shallow.

I think we also need to think about what 'rigour’ means. Grokkability is easy enough - we put what we’ve produced in front of people and see how they respond. But rigour is trickier. It depends not on the start of the journey but on the end of it. Does what we’ve produced lead to the outcomes we want?

With the Web Literacy Map, the outcomes we want are that people improve in their ability to read, write and participate on the web. We’ve intentionally used verbs with the skills we’ve listed, as we don’t want this to be 'head’ knowledge. It’s not much use just being able to pass a pencil-and-paper test. Applicability is everything. Literacy means a change in identity.

So we come to the dreaded problem of measurement. I said that grokkability is understanding things at the start of the journey and rigour getting the right outcomes at the end. Here’s the point at which it gets very interesting. The easy thing would be to throw our hands up in the air and say that we’re only providing the raw materials from which others can build activities, learning pathways and assessments. And to some extent that is the scope of the Web Literacy Map. It’s kind of infused with Mozilla’s mission but anyone can use it and contribute to it.

It’s outside the scope of this talk, really, but I thought I’d just point to some things my team is doing in the future. First, we want to build clear learning pathways that lead to meaningful credentials. People should be able to show what they know and can do with the web. That’s likely to start with Web Literacy Basics 101 and will probably use Open Badges. Second, we want to encourage mentors and leaders within the community. We’re going to do this through what we’re currently calling ‘Webmaker Clubs’. These are best understood as people coming together to learn and teach the web. Third, we want to focus on mobile - both in terms of devices and the mobility of the learner. This is particularly important in areas of the world where people are experiencing the web for the first time, and doing so on a mobile device. Finally, and tentatively, we want to use ‘learning analytics’ to find out the best ways in which we can teach these skills.

If you’d like to help us with any of that, you can.

Get involved!

I’m really looking forward to finding out what you all think about what I’ve discussed here. If you’d like to get involved, that’s great. There’s a canonical URL to bookmark that will take you to the correct place on the Mozilla wiki: http://bit.ly/weblitmapv2.

We’ve got a couple more community calls before Christmas and then there’ll be some in the new year. I also invite you to contribute even if you can’t make the calls. I’m happy to begin that process by email, but after a couple of exchanges I’ll probably invite you to work openly by posting to the Teach The Web discussion forum.

So, that’s pretty much it from me. Please do ask me hard questions and push back as hard as you can. It helps all of us sharpen our thinking and means we put the best stuff out there that we can!


Comments? Questions? Email me: doug@mozillafoundation.org

Roberto A. VitilloA Telemetry API for Spark

Check out my previous post about Spark and Telemetry data if you want to find out what all the fuss about Spark is. This post is a step-by-step guide on how to run a Spark job on AWS and use our simple Scala API to load Telemetry data.

The first step is to start a machine on AWS using Mark Reid’s nifty dashboard, in his own words:

  1. Visit the analysis provisioning dashboard at telemetry-dash.mozilla.org and sign in using Persona (with an @mozilla.com email address as mentioned above).
  2. Click “Launch an ad-hoc analysis worker”.
  3. Enter some details. The “Server Name” field should be a short descriptive name, something like “mreid chromehangs analysis” is good. Upload your SSH public key (this allows you to log in to the server once it’s started up).
  4. Click “Submit”.
  5. A Ubuntu machine will be started up on Amazon’s EC2 infrastructure. Once it’s ready, you can SSH in and run your analysis job. Reload the webpage after a couple of minutes to get the full SSH command you’ll use to log in.

Now connect to the machine and clone my starter project template:

git clone https://github.com/vitillo/mozilla-telemetry-spark.git
cd mozilla-telemetry-spark && source aws/setup.sh

The setup script will install Oracle’s JDK among some other bits. Now we are finally ready to give Spark a spin by launching:

sbt run

The command will run a simple job that computes the Operating System distribution for a small number of pings. It will take a while to complete as sbt, an interactive build tool for Scala, is downloading all required dependencies for Spark on the first run.

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

import org.json4s._
import org.json4s.jackson.JsonMethods._

import Mozilla.Telemetry._

object Analysis{
  def main(args: Array[String]) {
    // Run Spark locally, using all available cores.
    val conf = new SparkConf().setAppName("mozilla-telemetry").setMaster("local[*]")
    implicit val sc = new SparkContext(conf)
    implicit lazy val formats = DefaultFormats

    // Fetch a 10% sample of the nightly 36 pings submitted on the 9th and 10th of November.
    val pings = Pings("Firefox", "nightly", "36.0a1", "*", ("20141109", "20141110")).RDD(0.1)

    // Extract the OS field from each ping's JSON payload and count submissions per OS.
    var osdistribution = pings.map(line => {
      ((parse(line.substring(37)) \ "info" \ "OS").extract[String], 1)
    }).reduceByKey(_+_).collect

    println("OS distribution:")
    osdistribution.map(println)

    sc.stop()
  }
}

If the job completed successfully, you should see something like this in the output:

OS distribution:
(WINNT,4421)
(Darwin,38)
(Linux,271)

To start writing your job, simply customize src/main/scala/main.scala to your needs. The Telemetry Scala API allows you to define an RDD for a Telemetry filter:

val pings = Pings("Firefox", "nightly", "36.0a1", "*", ("20141109", "20141110")).RDD(0.1)

The above statement retrieves a sample of 10% of all Telemetry submissions of Firefox received on the 9th and 10th of November for any build-id of nightly 36. The last 2 parameters of Pings, i.e. build-id and date, accept either a single value or a tuple specifying a range.
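For instance, here are two hypothetical variations on that statement (the specific build-id, date and sampling values below are made up for illustration, not taken from the example above); either line would replace the pings definition in src/main/scala/main.scala:

// Pin the build-id to a single (made-up) value while keeping a date range:
val pinnedBuild = Pings("Firefox", "nightly", "36.0a1", "20141109030203", ("20141109", "20141110")).RDD(0.1)

// Use a single value for both build-id and date; a fraction of 1.0 is assumed to request the full, unsampled set:
val singleDay = Pings("Firefox", "nightly", "36.0a1", "*", "20141110").RDD(1.0)

As a rule of thumb, start with a small sampling fraction while iterating and only widen the filter once the job does what you expect.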

If you are interested in learning more, there is going to be a Spark & Telemetry tutorial at MozLandia next week! I will briefly go over the data layout of Telemetry and how Spark works under the hood, and finally jump into a hands-on interactive analysis session with real data. No prerequisites are required in terms of Telemetry, Spark or distributed computing.

Time: Friday, December 5, 2014, 4:00:00 PM – 5:30:00 PM GMT -05:00
Location: Belmont Room, Marriott Waterfront 2nd, Mozlandia


Mozilla Reps CommunityReps Weekly Call – November 27th 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • End of the year – Metrics and Receipts.
  • Reminder: Vouch and vouched on Mozillians.
  • Community PR survey.
  • AdaCamp.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Gervase MarkhamBugzilla for Humans, II

In 2010, johnath did a very popular video introducing people to Bugzilla, called “Bugzilla for Humans“. While age has been kind to johnath, it has been less kind to his video, which now contains several screenshots and bits of dialogue which are out of date. And, being a video featuring a single presenter, it is somewhat difficult to “patch” it.

Enter Popcorn Maker, the Mozilla Foundation’s multimedia presentation creation tool. I have written a script for a replacement presentation, voiced it up, and used Popcorn Maker to put it together. It’s branded as being in the “Understanding Mozilla” series, as a sequel to “Understanding Mozilla: Communications” which I made last year.

So, I present “Understanding Mozilla: Bugzilla“, an 8.5 minute introduction to Bugzilla as we use it here in the Mozilla project:

Because it’s a Popcorn presentation, it can be remixed. So if the instructions ever change, or Bugzilla looks different, new screenshots can be patched in or erroneous sections removed. It’s not trivial to seamlessly patch my voiceover unless you get me to do it, but it’s still much more possible than patching a video. (In fact, the current version contains a voice patch.) It can also be localized – the script is available, and someone could translate it into another language, voice it up, and then remix the presentation and adjust the transitions accordingly.

Props go to the Popcorn team for making such a great tool, and the Developer Tools team for Responsive Design View and the Screenshot button, which makes it trivial to reel off a series of screenshots of a website in a particular custom size/shape format without any need for editing.

Doug BelshawOn the denotative nature of programming

This is just a quick post almost as a placeholder for further thinking. I was listening to the latest episode of Spark! on CBC Radio about Cracking the code of beauty to find the beauty of code. Vikram Chandra is a fiction author as well as a programmer and was talking about the difference between the two mediums.

It’s definitely worth a listen [MP3]

The thing that struck me was the (perhaps obvious) insight that when writing code you have to be as denotative as possible. That is to say ambiguity is a bad thing leading to imprecision, bugs, and hard-to-read code. That’s not the case with fiction, which relies on connotation.

This reminded me of a paper I wrote a couple of years ago with my thesis supervisor about a ‘continuum of ambiguity’. In it, we talk about the overlap between the denotative and connotative aspects of a word, term, or phrase being the space in which ambiguity occurs. For everything other than code, it would appear, this is the interesting and creative space.

I’ve recently updated the paper to merge comments from the 'peer review’ I did with people in my network. I also tidied it up a bit and made it look a bit nicer.

Read it here: Digital literacy, digital natives, and the continuum of ambiguity


Comments? Questions? Email me: doug@mozillafoundation.org

Soledad PenadesPublishing a Firefox add-on without using addons.mozilla.org

A couple of days ago Tom Dale published a post detailing the issues the Ember team are having with getting the Ember Inspector add-on reviewed and approved.

It left me wondering if there would not be any other way to publish add-ons on a different site. Knowing Mozilla, it would be very weird if add-ons were “hardcoded” and tied only and exclusively to a mozilla.org property.

So I asked. And I got answers. The answer is: yes, you can publish your add-on anywhere, and yes your add-on can get the benefit of automatic updates too. There are a couple of things you need to do, but it is entirely feasible.

First, you need to host your add-on using HTTPS or “all sorts of problems will happen”.

Second: the manifest inside the add-on must have a field pointing to an update file. This field is called the updateURL, and here’s an example from the very own Firefox OS simulator source code. Snippet for posterity:

<em:updateURL>@ADDON_UPDATE_URL@</em:updateURL>

You could have some sort of “template” file to generate the actual manifest at build time–you already have some build step that creates the xpi file for the add-on anyway, so it’s a matter of creating this little file.

And you also have to create the update.rdf file which is what the browser will be looking at somewhat periodically to see if there’s an update. Think of that as an RSS feed that the browser subscribes to ;-)

Here’s, again, an example of what an update.rdf file looks like, taken from one of the Firefox OS simulators:

<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:em="http://www.mozilla.org/2004/em-rdf#">
<Description about="urn:mozilla:extension:fxos_2_2_simulator@mozilla.org">
<em:updates>
<Seq><li>
<Description>
  <em:version>2.2.20141123</em:version>
  <em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>19.0</em:minVersion>
    <em:maxVersion>40.*</em:maxVersion>
    <em:updateLink>https://ftp.mozilla.org/pub/mozilla.org/labs/fxos-simulator/2.2/mac64/fxos-simulator-2.2.20141123-mac64.xpi</em:updateLink>
  </Description>
  </em:targetApplication>
</Description>
</li></Seq>
</em:updates>
</Description>
</RDF>

And again this file could be generated at build time and perhaps checked into the repo along with the xpi file containing the add-on itself, and served using GitHub Pages, which do allow serving over HTTPS.

The Firefox OS simulators are a fine example of add-ons that you can install, get automatic updates for, and are not hosted in addons.mozilla.org.

Hope this helps.

Thanks to Ryan Stinnett and Alex Poirot for their information-rich answers to my questions–they made this post possible!


Benoit GirardImproving Layer Dump Visualization

I’ve blogged before about adding a feature to visualize platform log dumps including the layer tree. This week while working on bug 1097941 I had no idea which module the bug was coming from. I used this opportunity to improve the layer visualization features, hoping that it would help me identify the bug. Here are the results (working for both desktop and mobile):

Layer Tree Visualization Demo
Layer Tree Visualization Demo – Maximize me

This tool works by parsing the output of layers.dump and layers.dump-texture (not yet landed). I reconstruct the data as DOM nodes, which can quite trivially support the features of a layer tree because layer trees are designed to be mapped from CSS. From there some JavaScript or the browser devtools can be used to inspect the tree. In my case all I had to do was locate which layer my bad texture data was coming from: 0xAC5F2C00.

If you want to give it a spin just copy this pastebin and paste it here and hit ‘Parse’. Note: I don’t intend to keep backwards compatibility with this format so this pastebin may break after I go through the review for the new layers.dump-texture format.


Yura ZenevichResources For Contributing to Firefox OS Accessibility.

Resources For Contributing to Firefox OS Accessibility.

28 Nov 2014 - Toronto, ON

I believe when contributing to Firefox OS and Gaia, just like with most open source projects, a lot of attention should be given to reducing the barrier to entry for new contributors. It is even more vital for Gaia since it is an extremely fast moving project and the number of things to keep track of is overwhelming. In an attempt to make it easier to start contributing to Firefox OS Accessibility I compiled the following list of resources, which I will try to keep up to date. It should be helpful for a successful entry into the project:

Firstly, links to high level documentation:


Git

Gaia project is hosted on Github and the version control that is used is Git. Here's a link to the project source code:

https://github.com/mozilla-b2g/gaia/

One of my coworkers (James Burke) proposed the following workflow that you might find useful (link to source):

  • Fork Gaia using Github UI (I will use my Github user name - yzen as an example)
  • From your fork of Gaia (https://github.com/yzen/gaia), clone that locally, and set up a "remote" that is called "upstream" that points to the mozilla-b2g/gaia repo:
    git clone --recursive git@github.com:yzen/gaia.git gaia-yzen
    cd gaia-yzen
    git remote add upstream git@github.com:mozilla-b2g/gaia.git
  • For each bug you are working on, create a branch to work on it. This branch will be used for the pull request when you are ready. So taking this bug number 123 as an example, and assuming you are starting in your clone's master branch in the project directory:
    # this updates your local master to match mozilla-b2g's latest master
    # you should always do this to give better odds your change will work
    # with the latest master state for when the pull request is merged
    git pull upstream master

    # this updates your fork on github's master to match
    git push origin master

    # Create bug-specific branch based on current state of master
    git checkout -b bug-123
  • Now you will be in the bug-123 branch locally, and its contents will look the same as the master branch. The idea with bug-specific branches is that you keep your master branch pristine and only matching what is in the official mozilla-b2g branch. No other local changes. This can be useful for comparisons or rebasing.

  • Do the changes in relation to the bug you are working on.

  • Commit the change to the branch and then push the branch to your fork. For the commit message, you can just copy in the title of the bug:

    git commit -am "Bug 123 - this is the summary of changes."
    git push origin bug-123
  • Now you can go to https://github.com/yzen/gaia and do the pull request.

  • In the course of the review, if you need to do other commits to the branch for review feedback, once it is all reviewed, you can flatten all the commits into one commit, then force push the change to your branch. I normally use rebase -i for this. So, in the gaia-yzen directory, while you are in the bug-123, you can run:

    git rebase -i upstream/master

At this point, git gives you a way to edit all the commits. I normally 'pick' the first one, then choose 's' for squash for the rest, so the rest of the commits are squashed to the first picked commit.

Once that is done and git is happy, you can then force push the new state of the branch back to GitHub:

    git push -f origin bug-123

More resources at:


Source Code

  • All apps are located in the apps/ directory. Each app is located within its own directory. So for example if you are working on the Calendar app you would be making your changes in the apps/calendar directory.

  • To make sure that the improvements we work on actually help Firefox OS accessibility and do not regress, we have a policy of adding gaia-ui python Marionette tests for all new accessible functionality. You can find the tests in the tests/python/gaia-ui-tests/gaiatest/tests/accessibility/ directory.

More resources at:


Building and Running Gaia


Testing


Localization

Localization is very relevant to accessibility especially because one of the tasks that we perform when making something accessible is ensuring that all elements in the applications are labeled for the user of assistive technologies. Please see Localization best practices for guidelines on how to add new text to applications.


Debugging


Using a screen reader

Using a device or navigating a web application is different with a screen reader. The screen reader introduces the concept of a virtual cursor (or focus) that represents the screen reader's current position inside the app or web page. For more information and example videos please see: Screen Reader


Accessibility

Here are some of the basic resources to help you get to know what mobile accessibility (and accessibility) is:

yzen

Kevin NgoAdelheid, an Interactive Photocentric Storybook

Photos scroll along the bottom, pages slide left and right.

Half a year ago, I built an interactive photocentric storybook as a gift to my girlfriend for our anniversary. It binds photos, writing, music, and animation together into an experiential walk down memory lane. I named it Adelheid, a long-form version of my girlfriend's name. And it took me about a month of my after-work free time whenever she wasn't around. Adelheid is an amalgamation of my thoughts as it molds my joy of photography, writing, and web development into an elegantly-bound package.


A preview of the personal storybook I put together.

Design Process

As before, I wanted it to be a representation of myself: photography, writing, web development. I spent time sketching it out in a notebook and came up with this. The storybook is divided into chapters. Chapters consist of a song, summary text, a key photo, other photos, and moments. Moments are like subchapters; they consist of text and a key photo. Chapters slide left and right like pages in a book, photos roll through the bottom like an image reel, moments lie behind the chapters like the back of a notecard, all while music plays in the background. Then I put in a title page at the beginning that lifts like a stage curtain.

It took a month of work to bring it to fruition, and it was at last unveiled as a surprise on a quiet night at Picnic Island Park in Tampa, Florida.

Technical Bits

With all of the large image and audio files, it becomes quite a large app. My private storybook contains about 110MB, as a single-page app! Well, that's quite ludicrous. However, I made it easy for myself and intended it to only be used as a packaged app. This means I don't have to worry about load times over a web server since all assets can be downloaded and installed as a desktop app.

Unfortunately, it currently only works well in Firefox. Chrome was targeted initially but was soon dropped to decrease maintenance time and hit my deadline. There's a lot of fancy animation going on, and it was difficult to get it working properly in both browsers. Not only for CSS compatibility, but it currently only works as a packaged app for Firefox. Packaged apps have not been standardized, and I only configured packaged app manifests for Firefox's specifications.

After the whole thing, I became a bit more adept at CSS3 animations. This included the chapter turns, image reels, and moment flips. Some nice touches were parallaxed images so the key images transitioned a bit slower to give off a three-dimensional effect. Also the audio faded in and out between chapter turns using a web audio library.

You can install the demo app at adelheid.ngokevin.com.

Christian HeilmannWhat if everything is awesome?


These are the notes for my talk at Codemotion Madrid this year.
You can watch the screencast on YouTube and you can check out the slides at Slideshare.

An incredibly silly movie

The other day I watched Pacific Rim and was baffled by the awesomeness and the awesome inanity of this movie.

Let’s recap a bit:

  • There is a rift to another dimension under water that lets alien monsters escape into our world.
  • These monsters attack our cities and kill people. That is bad.
  • The most effective course of action is to build massive, erect walking robots on land to battle them.
  • These robots are controlled by pilots who walk inside them and throw punches to box these monsters.
  • These pilots all are super fit, ripped and beautiful and in general could probably take on these monsters in a fight with bare hands. The scientists helping them are helpless nerds.
  • We need to drop the robots with helicopters to where they are needed, because that looks awesome, too.

All in all the movie is borderline insane: if we had a rift like that under water, all we’d need to do is mine it. Or have some massive ships and submarines where the rift is, ready to shoot and bomb anything that comes through. Which, of course, beats trying to communicate with it.

The issue is that this solution would not make for a good blockbuster 3D movie aimed at 13 year olds. Nothing fights or breaks in a fantastic manner and you can’t show crumbling buildings. We’d be stuck with mundane tasks. Like writing a coherent script, proper acting or even manual camera work and settings instead of green screen. We can’t have that.

Tech press hype

What does all that have to do with web development? Well, I get the feeling we got to a world where we try to be awesome for the sake of being awesome. And at the same time we seem to be missing out on the fact that what we have is pretty incredible as it is.

One thing I blame is the tech press. We still get weekly Cinderella stories of the lonely humble developer making it big with his first app (yes, HIS first app). We hear about companies buying each other for billions and everything being incredibly exciting.

Daily frustrations

In stark contrast to that, our daily lives as developers are full of frustrations. People not knowing what we do and thus not really giving us feedback on what we do, for example. We only appear on the scene when things break.

Our users can also be a bit of an annoyance as they do not upgrade the way we want them to and keep using things differently than we anticipated.

And even when we mess up, not much happens. We put our hearts and lots of efforts into our work. When we see something obviously broken or in dire need of improvement we want to fix it. The people above us in the hierarchy, however, are happy to see it as a glitch to fix later.

Flight into professionalism

Instead of working on the obviously broken communication between us and those who use our products and us and those who sell them (or even us and those who maintain our products) I hear a louder and louder call for “professional development”. This involves many abstractions and intelligent package managers and build scripts that automate a lot of the annoying cruft of our craft. Cruft in the form of extraneous code. Code that got there because of mistakes that our awesome new solutions make go away. But isn’t the real question why we still make so many mistakes in the first place?

Apps good, web bad!

One of the things we seem to be craving is to match the state of affairs of native platforms, especially the form factor of apps. Apps seem to be the new, modern form factor of software delivery. Fact is that they are a questionable success (and may even be a step back in software evolution, as I put it in a TEDx talk). If you look at who earns something with them and how long they stay in use on average, it is hard to shout “hooray for apps”. On the web, the problem is that there are so far no cross-platform standards that define what an app is. If you wonder how that is coming along, the current state of mobile apps on the web is described in meticulous detail at the W3C.

Generic code is great code?

A lot of what we crave as developers is generic. We don’t want to write code that does one job well; we want to write code that can take any input and do something intelligent with it. This is feel-good code. We are not only clever enough to be programmers; we also write solutions that prevent people from making mistakes by predicting them.

Fredrik Noren wrote a brilliant piece on this called “On Generalisation“. In it he argues that writing generic code means trying to predict the future and that we are bad at that. He calls out for simpler, more modular and documented code that people can extend instead of catch-all solutions to simple problems.

I found myself nodding along reading this. There seems to be a general tendency to re-invent instead of improving existing solutions. This comes naturally to developers – we want to create instead of read and understand. I also blame sites like hacker news which are a beauty pageant of small, quick and super intelligent technical solutions for every conceivable problem out there.

Want some proof? How about Static Site Generators listing 295 different ways to create static HTML pages? Let’s think about that: static HTML pages!

The web is obese!

We try to fix our world by stacking abstractions and creating generic solutions for small issues. The common development process and especially the maintenance process looks different, though.

People using Content Management Systems to upload lots of un-optimised photos are a problem. People using too many of our sleek and clever solutions also add to the fact that web performance is still a big issue. According to the HTTP Archive, the average web site is 2 MB of data delivered in 100(!) HTTP requests. And that's years after we told people that each request is a massive cause of a slow and sluggish web experience. How can anyone explain things like the new LG G Watch site clocking in at 54 MB on the first load whilst being a responsive design?

Tools of awesome

There are no excuses. We have incredible tools that give us massive insight into our work. What we do is not black art any longer, we don’t hope that browsers do good things with our code. We can peek under the hood and see the parts moving.

Webpagetest.org is incredible. It gives us detailed insight into what is going right and wrong in our web sites right in the browser. You can test the performance of a site simulating different speeds and load it from servers all over the world. You get a page optimisation checklist, graphs about what got loaded when and when things started rendering. You even get a video of your page loading and getting ready for users to play with.

There are many resources on how to use this tool and others that help us with fixing performance issues. Addy Osmani gave a great talk at CSS Conf in Berlin re-writing the JSConf web site live on stage using many of these tools.

Browsers are incredible tools

Browsers have evolved from simple web consumption tools to full-on development environments. Almost all browsers have some developer tools built in that not only allow you to see the code in the current page but also to debug. You have step-by-step debugging of JavaScript, CSS debugging and live previews of colours, animations, element dimensions, transforms and fonts. You have insight into what was loaded in what sequence, you can see what is in localStorage and you can do performance analysis and see the memory consumption.
The innovation in development tools of browsers is incredible and moves at an amazing speed. You can now even debug on devices connected via USB or wireless and Chrome allows you to simulate various devices and network/connectivity conditions.
Sooner or later this might mean that we won’t need any other editors any more. Any user downloading a browser could also become a developer. And that is incredible. But what about older browsers?

Polyfills as a service

A lot of bloat on the web happens because of us trying to give new, cool effects to old, tired browsers. We do this because of a wrong understanding of the web. It is not about giving the same functionality to everybody, but about giving a working experience to everybody.

The idea of a polyfill is genius: write a solution for an older environment to play with new functionality and get our UX ready for the time browsers support it. It fails to be genius when we never, ever remove the polyfills from our solutions. The Financial Times development team had the great idea of offering polyfills as a service. This means you include one JavaScript file.

<script src="//cdn.polyfill.io/v1/polyfill.min.js" 
        async defer>
</script>

You can define which functionality you want to polyfill and it’ll be done that way. When the browser supports what you want, the stop-gap solution never gets included at all. How good is that?

Flexbox growing up

Another thing of awesome I saw the other day at CSS tricks. Chris Coyier uses Flexbox to create a toolbar that has fixed elements and others using up the rest of the space. It extends semantic HTML and does a great job being responsive.


All the CSS code needed for it is this:

*, *:before, *:after {
  -moz-box-sizing: inherit;
       box-sizing: inherit;
}
html {
  -moz-box-sizing: border-box;
       box-sizing: border-box;
}
body {
  padding: 20px;
  font: 100% sans-serif;
}
.bar {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  -webkit-align-items: center;
      -ms-flex-align: center;
          align-items: center;
  width: 100%;
  background: #eee;
  padding: 20px;
  margin: 0 0 20px 0;
}
.bar > * {
  margin: 0 10px;
}
.icon {
  width: 30px;
  height: 30px;
  background: #ccc;
  border-radius: 50%;
}
.search {
  -webkit-flex: 1;
      -ms-flex: 1;
          flex: 1;
}
.search input {
  width: 100%;
}
.bar-2 .username {
  -webkit-order: 2;
      -ms-flex-order: 2;
          order: 2;
}
.bar-2 .icon-3 {
  -webkit-order: 3;
      -ms-flex-order: 3;
          order: 3;
}
.bar-3 .search {
  -webkit-order: -1;
      -ms-flex-order: -1;
          order: -1;
}
.bar-3 .username {
  -webkit-order: 1;
      -ms-flex-order: 1;
          order: 1;
}
.no-flexbox .bar {
  display: table;
  border-spacing: 15px;
  padding: 0;
}
.no-flexbox .bar > * {
  display: table-cell;
  vertical-align: middle;
  white-space: nowrap;
}
.no-flexbox .username {
  width: 1px;
}
@media (max-width: 650px) {
  .bar {
    -webkit-flex-wrap: wrap;
        -ms-flex-wrap: wrap;
            flex-wrap: wrap;
  }
  .icon {
    -webkit-order: 0 !important;
        -ms-flex-order: 0 !important;
            order: 0 !important;
  }
  .username {
    -webkit-order: 1 !important;
        -ms-flex-order: 1 !important;
            order: 1 !important;
    width: 100%;
    margin: 15px;
  }
 
  .search {
    -webkit-order: 2 !important;
        -ms-flex-order: 2 !important;
            order: 2 !important;
    width: 100%;
  }
}

That is pretty incredible, isn’t it?

More near-future tech of awesome

Other things that are brewing get me equally excited. WebRTC, WebGL, Web Audio and many more things are pointing to a high fidelity web. A web that allows for rich gaming experiences and productivity tools built right into the browser. We can video and audio chat with each other and send data in a peer-to-peer fashion without relying on or burning up a server between us.

Service Workers will allow us to build a real offline experience. With AppCache we’re just hoping that users will get something and that we don’t aggressively cache outdated information. If you want to know more about that, watch these two amazing videos by Jake Archibald: The Service Worker: The Network layer that is yours to own and The Service worker is coming, look busy!

Web Components have been the near future for quite a while now and seem to be in a bit of a “let’s build a framework instead” rut. Phil Legetter has done an incredible job collecting what that looks like. It is true: support for Shadow DOM across the board is still not quite there. But a lot of these frameworks offer incredible client-side functionality to go into the standard.

What can you do?

I think it is time to stop chasing the awesome of “soon we will be able to use that” and instead be more fearless about using what we have now. We love to write about just how broken things are when they are in their infancy. We tend to forget to re-visit them when they’ve matured more. Many things that were a fever dream a year ago are now ready for you to roll out – if you work with progressive enhancement. In general, this is a safe bet as the web will never be in a finished state. Even native platforms are only in a fixed state between major releases. Mattias Petter Johansson of Spotify put it quite succinctly in a thread on why JavaScript is the only client-side language:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

The web is chaotic, that much is for sure, but it also aims to be longer lasting than other platforms. The in-built backwards compatibility of its technologies makes it a beautiful investment. As Paul Bakaus of Google put it:

If you build a web app today, it will run in browsers 10 years from now. Good luck trying the same with your favorite mobile OS (excluding Firefox OS).

The other issue we have to overcome is the dogma associated with some of our decisions. Yes, it would be excellent if we could use open web standards to build everything. It would be great if all solutions had their principles of distribution, readability and easy sharing. But we live in a world that has changed. In many ways in the mobile space we have to count our blessings. We can and should allow some closed technology to take its course before we go back to these principles. We’ve done it with Flash, we can do it with others, too. My mantra these days is the following:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Go share and teach

One thing is for sure: we never had a more amazing environment to learn and share. Services like GitHub, JSFiddle, JSBin and Codepen make it easy to distribute and explain code. You can show instead of describe and you can fix instead of telling people that they are doing it wrong. There is no better way to learn than to show, and if you set out to teach you end up learning.

A great demo of this is together.js. Using this WebRTC based tool (or its implementation in JSFiddle by hitting the collaborate button) you can code together, with several cursors, audio chat or a text chat client directly in the browser. You explain in context and collaborate live. And you make each other learn something and get better. And this is what is really awesome.

Marco BonardoMozilla at the JS Day 2012 in Verona

Allison NaaktgeborenApplying Privacy Series: The 2nd meeting

The day after the first meeting…

Engineering Manager: Welcome DBA, Operations Engineer, and Privacy Officer. Did you all get a chance to look over the project wiki? What do you think?

Operations Engineer: I did.

DBA: Yup, and I have some questions.

Privacy Officer: Sounds really cool, as long as we’re careful.

Engineer: We’re always careful!

DBA: There are a lot of pages on the web. Keeping that much data is going to be expensive. I didn’t see anything on the wiki about evicting entries and for a table that big, we’ll need to do that regularly.

Privacy Officer: Also, when will we delete the device ids? Those are like a fingerprint for someone’s phone, so keeping them around longer than absolutely necessary increases risk for both the user and the company.

Operations Engineer: The less we keep around, the less it costs to maintain.

Engineer: We know that most mobile users have only 1-3 pages open at any given time and we estimate no more than 50,000 users will be eligible for the service.

DBA: Well that does suggest a manageable load, but that doesn’t answer my question.

Engineer: Want to say if a page hasn’t been accessed in 48 hours we evict it from the server? And we can tune that knob as necessary?

Operations Engineer: As long as I can tune it in prod if something goes haywire.

Privacy Officer: And device ids?

Engineer: Apply the same rule to them?

Engineering Manager: 48 hours would be too short. Not everyone uses their mobile browser every day. I’d be more comfortable with 90 days to start.

DBA: I imagine you’d want secure destruction for the ids.

Privacy Officer: You got it!

DBA: what about the backup tapes? We back up the dbs regularly?

Privacy Officer: Are the backups online?

DBA: No, like I said, they’re on tape. Someone has to physically run ‘em through a machine. You’d need physical access to the backup storage facility.

Privacy Officer: Then it’s probably fine if we don’t delete from the tapes.

Operations Engineer: What is the current timeline?

Engineer: End of the quarter, 8 weeks or so.

Operations Engineer: We’re under water right now, so it might be tight getting the hardware in & set up. New hardware orders usually take 6 weeks to arrive. I can’t promise the hardware will be ready in time.

Engineering Manager: We understand, please do your best and if we have to, Product Manager won’t be happy, but we’ll delay the feature if we need to.

Privacy Officer: Who’s going to be responsible for the data on the stage & production servers?

Engineering Manager: Product Manager has final say.

DBA: thanks. good to know!

Engineer: I’ll draw up a plan  and send it around for feedback tomorrow.

 

Who brought up user data safety & privacy concerns in this conversation?

Privacy Officer is obvious. The DBA & Operations Engineer also raised privacy concerns.

Robert HelmerBetter Source Code Browsing With FreeBSD and Mozilla DXR

Lately I've been reading about the design and implementation of the FreeBSD Operating System (great book, you should read it).

However I find browsing the source code quite painful. Using vim or emacs is fine for editing individual files, but when you are trying to understand and browse around a large codebase, dropping to a shell and grepping/finding around gets old fast. I know about ctags and similar, but I also find editors uncomfortable for browsing large codebases for an extended amount of time - web pages tend to be easier on the eyes.

There's an LXR fork called FXR available, which is way better and I am very grateful for it - however it has all the same shortcomings of LXR that we've become very familiar with on the Mozilla LXR fork (MXR):

  • based on regex, not static analysis of the code - sometimes it gets things wrong, and it doesn't really understand the difference between a variable with the same name in different files
  • not particularly easy on the eyes (shallow and easily fixable, I know)

I've been an admirer of Mozilla's next gen code browsing tool, DXR, for a long time now. DXR uses a clang plugin to do static analysis of the code, so it produces the real call graph - this means it doesn't need to guess at the definition of types or where a variable is used, it knows.

A good example is to contrast a file on MXR with the same file on DXR. Let's say you wanted to know where this macro was first defined, that's easy in DXR - just click on the word "NS_WARNING" and select "Jump to definition".

Now try that on MXR - clicking on "NS_WARNING" instead yields a search which is not particularly helpful, since it shows every place in the codebase that the word "NS_WARNING" appears (note that DXR has the ability to do this same type of search, in case that's really what you're after).

So that's what DXR is and why it's useful. I got frustrated enough with the status quo trying to grok the FreeBSD sources that I took a few days and, with the help of folks in the #static channel on irc.mozilla.org (particularly Erik Rose), got DXR running on FreeBSD and indexed a tiny part of the source tree as a proof-of-concept (the source for "/bin/cat"):

http://freebsdxr.rhelmer.org

This is running on a FreeBSD instance in AWS.

DXR is currently undergoing major changes, SQLite to ElasticSearch transition being the central one. I am tracking how to get the "es" branch of DXR going in this gist.

Currently I am able to get a LINT kernel build indexed on DXR master branch, but still working through issues on the "es" branch.

Overall, I feel like I've learned way more about static analysis, how DXR works and the FreeBSD source code, produced some useful patches for Mozilla and the DXR project, and hopefully will provide a useful resource for the FreeBSD project along the way. Totally worth it, I highly recommend working with all of the aforementioned :)

Brian R. BondyDeveloping and releasing the Khan Academy Firefox OS app

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.

Why?

There's something exciting about being able to hold a world class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or Android based smart phone is out of reach for most people.

The app enables developing countries, lower income families, and anyone else to take advantage of the Khan Academy content. A persistent internet connection is not required.

What's that.... you say you want another use case? Well OK, here goes: for a parent wanting each of their kids to have access to Khan Academy at the same time, the device costs could be very expensive. Not anymore.

Screenshots!

App features

  • Access to the full library of Khan Academy videos and articles.
  • Search for videos and articles.
  • Ability to sign into your account for:
      • Profile access.
      • Earning points for watching videos.
      • Continuing where you left off from previous partial video watches, even if that was on the live site.
      • Partial and full completion status of videos and articles.
  • Downloading videos, articles, or entire topics for later use.
  • Sharing functionality.
  • Significant effort was put in to minify topic tree sizes for minimal memory use and faster loading.
  • Scrolling transcripts for videos as you watch.
  • The UI is highly influenced by the first generation iPhone app.

Development statistics

  • 340 commits
  • 4 months of consecutive commits with at least 1 commit per day
  • 30 minutes - 2 hours per day max

Technologies used

Technologies used to develop the app include:

Localization

The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute there's a lot of future tasks posted as issues on github.

Current minimum system requirements

  • Around 8MB of space.
  • 512 MB of RAM

Low memory devices

By default, apps on the Firefox marketplace are only served to devices with at least 500MB of RAM. To get them on 256MB devices, you need to do a low memory review.

One of the major enhancements I'd like to add next, is to add an option to use the YouTube player instead of HTML5 video. This may use less memory and may be a way onto 256MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request to get pre-installed on devices and I'll be looking into that in the near future after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.

Hannah KaneWe are very engaging

Yesterday someone asked me what the engagement team is up to, and it made me sad because I realized I need to do a waaaaay better job of broadcasting my team’s work. This team is dope and you need to know about it.

As a refresher, our work encompasses these areas:

  • Grantwriting
  • Institutional partnerships
  • Marketing and communications
  • Small dollar fundraising
  • Production work (i.e. Studio Mofo)

In short, we aim to support the Webmaker product and programs and our leadership pipelines any time we need to engage individuals or institutions.

What’s currently on our plate:

Pro-tip: You can always see what we’re up to by checking out the Engagement Team Workbench.

These days we’re spending our time on the following:

  • End of Year Fundraising: With the help of a slew of kick-ass engineers, Andrea and Kelli are getting to $2M. (view the Workbench).
  • Mozilla Gear launch: Andrea and Geoffrey are obsessed with branded hoodies. To complement our fundraising efforts, they just opened a brand new site for people to purchase Mozilla Gear (view the project management spreadsheet).
  • Fall Campaign: Remember the 10K contributor goal? We do! An-Me and Paul have been working with Claw, Amira, Michelle, and Lainie, among others, to close the gap through a partner-based strategy (view the Workbench).
  • Mobile Opportunity: Ben is helping to envision and build partnerships around this work, and Paul and Studio Mofo are providing marketing, comms, and production support (the Mobile Opportunity Workbench is here, the engagement-specific work will be detailed soon).
  • Building a Webmaker Marketing Plan for 2015: The site and programs aren’t going to market themselves! Paul is drafting a comprehensive marketing calendar for 2015 that complements the product and program strategies. (plan coming soon)
  • 2015 Grants Pipeline: Ben and An-Me are always on the lookout for opportunities, and Lynn is responsible for writing grants and reports to fund our various programs and initiatives.
  • Additional Studio Mofo projects: Erika, Mavis, and Sabrina are always working on something. In addition to their work supporting most of the above, you can see a full list of projects here.
  • Salesforce for grants and partnerships: We’ve completed a custom Salesforce installation and Ben has begun the process of training staff to use it. Much more to come to make it a meaningful part of our workflow (Workbench coming soon).
  • Open Web Fellows recruitment: We’re supporting our newest fellowship with marketing support (view the Hype Plan)

Niko MatsakisPurging proc

The so-called “unboxed closure” implementation in Rust has reached the point where it is time to start using it in the standard library. As a starting point, I have a pull request that removes proc from the language. I started on this because I thought it’d be easier than replacing closures, but it turns out that there are a few subtle points to this transition.

I am writing this blog post to explain what changes are in store and give guidance on how people can port existing code to stop using proc. This post is basically targeted at Rust devs who want to adapt existing code, though it also covers the closure design in general.

To some extent, the advice in this post is a snapshot of the current Rust master. Some of it is specifically targeting temporary limitations in the compiler that we aim to lift by 1.0 or shortly thereafter. I have tried to mention when that is the case.

The new closure design in a nutshell

For those who haven’t been following, Rust is moving to a powerful new closure design (sometimes called unboxed closures). This part of the post covers the highlights of the new design. If you’re already familiar, you may wish to skip ahead to the “Transitioning away from proc” section.

The basic idea of the new design is to unify closures and traits. The first part of the design is that function calls become an overloadable operator. There are three possible traits that one can use to overload ():

trait Fn<A,R> { fn call(&self, args: A) -> R };
trait FnMut<A,R> { fn call_mut(&mut self, args: A) -> R };
trait FnOnce<A,R> { fn call_once(self, args: A) -> R };

As you can see, these traits differ only in their “self” parameter. In fact, they correspond directly to the three “modes” of Rust operation:

  • The Fn trait is analogous to a “shared reference” – it means that the closure can be aliased and called freely, but in turn the closure cannot mutate its environment.
  • The FnMut trait is analogous to a “mutable reference” – it means that the closure cannot be aliased, but in turn the closure is permitted to mutate its environment. This is how || closures work in the language today.
  • The FnOnce trait is analogous to “ownership” – it means that the closure can only be called once. This allows the closure to move out of its environment. This is how proc closures work today.

Enabling static dispatch

One downside of the older Rust closure design is that closures and procs always implied virtual dispatch. In the case of procs, there was also an implied allocation. By using traits, the newer design allows the user to choose between static and virtual dispatch. Generic types use static dispatch but require monomorphization, and object types use dynamic dispatch and hence avoid monomorphization and grant somewhat more flexibility.

As an example, whereas before I might write a function that takes a closure argument as follows:

fn foo(hashfn: |&String| -> uint) {
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

I can now choose to write that function in one of two ways. I can use a generic type parameter to avoid virtual dispatch:

fn foo<F>(mut hashfn: F)
    where F : FnMut(&String) -> uint
{
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

Note that we write the type parameters to FnMut using parentheses syntax (FnMut(&String) -> uint). This is a convenient syntactic sugar that winds up mapping to a traditional trait reference (currently, for<'a> FnMut<(&'a String,), uint>). At the moment, though, you are required to use the parentheses form, because we wish to retain the liberty to change precisely how the Fn trait type parameters work.

A caller of foo() might write:

let some_salt: String = ...;
foo(|str| myhashfn(str.as_slice(), &some_salt))

You can see that the || expression still denotes a closure. In fact, the best way to think of it is that a || expression generates a fresh structure that has one field for each of the variables it touches. It is as if the user wrote:

let some_salt: String = ...;
let closure = ClosureEnvironment { some_salt: &some_salt };
foo(closure);

where ClosureEnvironment is a struct like the following:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

impl<'env,'arg> FnMut(&'arg String) -> uint for ClosureEnvironment<'env> {
    fn call_mut(&mut self, (str,): (&'arg String,)) -> uint {
        myhashfn(str.as_slice(), &self.some_salt)
    }
}

Obviously the || form is quite a bit shorter.

Using object types to get virtual dispatch

The downside of using generic type parameters for closures is that you will get a distinct copy of the fn being called for every callsite. This is a great boon to inlining (at least sometimes), but it can also lead to a lot of code bloat. It’s also often just not practical: many times we want to combine different kinds of closures together into a single vector. None of these concerns are specific to closures. The same things arise when using traits in general. The nice thing about the new closure design is that it lets us use the same tool – object types – in both cases.

If I wanted to write my foo() function to avoid monomorphization, I might change it from:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{...}

to:

fn foo(hashfn: &mut FnMut(&String) -> uint)
{...}

Note that the argument is now a &mut FnMut(&String) -> uint, rather than being of some type F where F : FnMut(&String) -> uint.

One downside of changing the signature of foo() as I showed is that the caller has to change as well. Instead of writing:

foo(|str| ...)

the caller must now write:

foo(&mut |str| ...)

Therefore, what I expect to be a very common pattern is to have a “wrapper” that is generic which calls into a non-generic inner function:

fn foo<F>(mut hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

This way, the caller does not have to change; only this outer wrapper is monomorphized, it will likely be inlined away, and the “guts” of the function remain using virtual dispatch.

In the future, I’d like to make it possible to pass object types (and other “unsized” types) by value, so that one could write a function that just takes a FnMut() and not a &mut FnMut():

fn foo(hashfn: FnMut(&String) -> uint)
{...}

Among other things, this makes it possible to transition simply between static and virtual dispatch without altering callers and without creating a wrapper fn. However, it would compile down to roughly the same thing as the wrapper fn in the end, though with guaranteed inlining. This change requires somewhat more design and will almost surely not occur by 1.0, however.

Specifying the closure type explicitly

We just said that every closure expression like || expr generates a fresh type that implements one of the three traits (Fn, FnMut, or FnOnce). But how does the compiler decide which of the three traits to use?

Currently, the compiler is able to do this inference based on the surrounding context – basically, the closure was an argument to a function, and that function requested a specific kind of closure, so the compiler assumes that’s the one you want. (In our example, the function foo() required an argument of type F where F implements FnMut.) In the future, I hope to improve the inference to a more general scheme.

Because the current inference scheme is limited, you will sometimes need to specify which of the three fn traits you want explicitly. (Some people also just prefer to do that.) The current syntax is to use a leading &:, &mut:, or :, kind of like an “anonymous parameter”:

// Explicitly create a `Fn` closure which cannot mutate its
// environment. Even though `foo()` requested `FnMut`, this closure
// can still be used, because a `Fn` closure is more general
// than `FnMut`.
foo(|&:| { ... })

// Explicitly create a `FnMut` closure. This is what the
// inference would select anyway.
foo(|&mut:| { ... })

// Explicitly create a `FnOnce` closure. This would yield an
// error, because `foo` requires a closure it can call multiple
// times in a row, but it is being given a closure that can be
// called exactly once.
foo(|:| { ... }) // (ERROR)

The main time you need to use an explicit fn type annotation is when there is no context. For example, if you were just to create a closure and assign it to a local variable, then a fn type annotation is required:

let c = |&mut:| { ... };

Caveat: It is still possible we’ll change the &:/&mut:/: syntax before 1.0; if we can improve inference enough, we might even get rid of it altogether.

Moving vs non-moving closures

There is one final aspect of closures that is worth covering. We gave the example of a closure |str| myhashfn(str.as_slice(), &some_salt) that expands to something like:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

Note that the variable some_salt that is used from the surrounding environment is borrowed (that is, the struct stores a reference to the string, not the string itself). This is frequently what you want, because it means that the closure just references things from the enclosing stack frame. This also allows closures to modify local variables in place.

However, capturing upvars by reference has the downside that the closure is tied to the stack frame that created it. This is a problem if you would like to return the closure, or use it to spawn another thread, etc.

For this reason, closures can also take ownership of the things that they close over. This is indicated by using the move keyword before the closure itself (because the closure “moves” things out of the surrounding environment and into the closure). Hence if we change that same closure expression we saw before to use move:

move |str| myhashfn(str.as_slice(), &some_salt)

then it would generate a closure type where the some_salt variable is owned, rather than being a reference:

struct ClosureEnvironment {
    some_salt: String
}

This is the same behavior that proc has. Hence, whenever we replace a proc expression, we generally want a moving closure.

Currently we never infer whether a closure should be move or not. In the future, we may be able to infer the move keyword in some cases, but it will never be 100% (specifically, it should be possible to infer that the closure passed to spawn should always take ownership of its environment, since it must meet the 'static bound, which is not possible any other way).

Transitioning away from proc

This section covers what you need to do to modify code that was using proc so that it works once proc is removed.

Transitioning away from proc for library users

For users of the standard library, the transition away from proc is fairly straightforward. Mostly it means that code which used to write proc() { ... } to create a “procedure” should now use move|| { ... }, to create a “moving closure”. The idea of a moving closure is that it is a closure which takes ownership of the variables in its environment. (Eventually, we expect to be able to infer whether or not a closure must be moving in many, though not all, cases, but for now you must write it explicitly.)

Hence converting calls to libstd APIs is mostly a matter of search-and-replace:

Thread::spawn(proc() { ... }) // becomes:
Thread::spawn(move|| { ... })

task::try(proc() { ... }) // becomes:
task::try(move|| { ... })

One non-obvious case is when you are creating a “free-standing” proc:

let x = proc() { ... };

In that case, if you simply write move||, you will get some strange errors:

let x = move|| { ... };

The problem is that, as discussed before, the compiler needs context to determine what sort of closure you want (that is, Fn vs FnMut vs FnOnce). Therefore it is necessary to explicitly declare the sort of closure using the : syntax:

let x = proc() { ... }; // becomes:
let x = move|:| { ... };

Note also that it is precisely when there is no context that you must also specify the types of any parameters. Hence something like:

let x = proc(x:int) foo(x * 2, y);
//      ~~~~ ~~~~~
//       |     |
//       |     |
//       |     |
//       |   No context, specify type of parameters.
//       |
//      proc always owns variables it touches (e.g., `y`)

might become:

let x = move|: x:int| foo(x * 2, y);
//      ~~~~ ^ ~~~~~
//       |   |   |
//       |   |  No context, specify type of parameters.
//       |   |
//       |   No context, also specify FnOnce.
//       |
//     `move` keyword means that closure owns `y`

Transitioning away from proc for library authors

The transition story for a library author is somewhat more complicated. The complication is that the equivalent of a type like proc():Send ought to be Box<FnOnce() + Send> – that is, a boxed FnOnce object that is also sendable. However, we don’t currently have support for invoking fn(self) methods through an object, which means that if you have a Box<FnOnce()> object, you can’t call its call_once method (put another way, the FnOnce trait is not object safe). We plan to fix this – possibly by 1.0, but possibly shortly thereafter – but in the interim, there are workarounds you can use.

In the standard library, we use a trait called Invoke (and, for convenience, a type called Thunk). You’ll note that although these two types are publicly available (under std::thunk), they do not appear in the public interface of any other stable APIs. That is, Thunk and Invoke are essentially implementation details that end users do not have to know about. We recommend you follow the same practice. This is for two reasons:

  1. It generally makes for a better API. People would rather write Thread::spawn(move|| ...) and not Thread::spawn(Thunk::new(move|| ...)) (etc).
  2. Eventually, once Box<FnOnce()> works properly, Thunk and Invoke may become deprecated. If this were to happen, your public API would be unaffected.

Basically, the idea is to follow the “thin wrapper” pattern that I showed earlier for hiding virtual dispatch. If you recall, I gave the example of a function foo that wished to use virtual dispatch internally but to hide that fact from its clients. It did so by creating a thin wrapper API that just called into another API, performing the object coercion:

fn foo<F>(mut hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

The idea with Invoke is similar. The public APIs are generic APIs that accept any FnOnce value. These just turn around and wrap that value up into an object. Here the problem is that while we would probably prefer to use a Box<FnOnce()> object, we can’t, because FnOnce is not (currently) object-safe. Therefore, we use the trait Invoke (I’ll show you how Invoke is defined shortly, just let me finish this example):

pub fn spawn<F>(taskbody: F)
    where F : FnOnce(), F : Send
{
    spawn_inner(box taskbody)
}

fn spawn_inner(taskbody: Box<Invoke+Send>)
{
    ...
}

The Invoke trait in the standard library is defined as:

trait Invoke<A=(),R=()> {
    fn invoke(self: Box<Self>, arg: A) -> R;
}

This is basically the same as FnOnce, except that the self type is Box<Self>, and not Self. This means that Invoke requires allocation to use; it is really tailored for object types, unlike FnOnce.

Finally, we can provide a bridge impl for the Invoke trait as follows:

impl<A,R,F> Invoke<A,R> for F
    where F : FnOnce(A) -> R
{
    fn invoke(self: Box<F>, arg: A) -> R {
        let f = *self;
        f(arg)
    }
}

This impl allows any type that implements FnOnce to use the Invoke trait.
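
To make the pattern concrete, here is a minimal sketch of what the body of spawn_inner might look like; this is my guess at the shape of such a function, not the actual libstd code, and it reuses the era-specific names from the snippets above:

fn spawn_inner(taskbody: Box<Invoke+Send>) {
    // `invoke` takes `self: Box<Self>`, so this call consumes the boxed
    // closure; the bridge impl above forwards it to the wrapped FnOnce value.
    taskbody.invoke(())
}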

High-level summary

Here are the points I want you to take away from this post:

  1. As a library consumer, the latest changes mostly just mean replacing proc() with move|| (sometimes move|:| if there is no surrounding context).
  2. As a library author, your public interface should be generic with respect to one of the Fn traits. You can then convert to an object internally to use virtual dispatch.
  3. Because Box<FnOnce()> doesn’t currently work, library authors may want to use another trait internally, such as std::thunk::Invoke.

I also want to emphasize that a lot of the nitty gritty details in this post are transitionary. Eventually, I believe we can reach a point where:

  1. It is never (or virtually never) necessary to declare Fn vs FnMut vs FnOnce explicitly.
  2. We can frequently (though not always) infer the keyword move.
  3. Box<FnOnce()> works, so Invoke and friends are not needed.
  4. The choice between static and virtual dispatch can be changed without affecting users and without requiring wrapper functions.

I expect the improvements in inference before 1.0. Fixing the final two points is harder and so we will have to see where it falls on the schedule, but if it cannot be done for 1.0 then I would expect to see those changes shortly thereafter.

Jared WeinThe Bugs Blocking In-Content Prefs, part 2

At the beginning of November I published a blog post with the list of bugs that are blocking in-content prefs from shipping. Since that post, quite a few bugs have been fixed and we figured out an approach for fixing most of the high-contrast bugs.

As in the last post, bugs that should be easy to fix for a newcomer are highlighted in yellow.

Here is the new list of bugs that are blocking the release:

The list is now down to 16 bugs (from 20). In the meantime, the following bugs have been fixed:

  • Bug 1022578: Can’t tell what category is selected in about:preferences when using High Contrast mode
  • Bug 1022579: Help buttons in about:preferences have no icon when using High Contrast mode
  • Bug 1012410: Can’t close in-content cookie exceptions dialog
  • Bug 1089812: Implement updated In-content pref secondary dialogs

Big thanks goes out to Richard Marti and Tim Nguyen for fixing the above mentioned bugs as well as their continued focus on helping to bring the In-Content Preferences to the Beta and Release channels.



Lucas RochaLeaving Mozilla

I joined Mozilla 3 years, 4 months, and 6 days ago. Time flies!

I was very lucky to join the company a few months before the Firefox Mobile team decided to rewrite the product from scratch to make it more competitive on Android. And we made it: Firefox for Android is now one of the highest rated mobile browsers on Android!

This has been the best team I’ve ever worked with. The talent, energy, and trust within the Firefox for Android group are simply amazing.

I’ve thoroughly enjoyed my time here but an exciting opportunity outside Mozilla came up and I decided to take it.

What’s next? That’s a topic for another post ;-)

Will Kahn-GreeneInput: New feedback form

Since the beginning of 2014, I've been laying the groundwork to rewrite the feedback form that we use on Input.

Today, after a lot of work, we pushed out the new form! Just in time for Firefox 34 release.

This blog post covers the circumstances of the rewrite.

Why?

In 2011, James, Mike and I rewrote Input from the ground up. In order to reduce the amount of time it took to do that rewrite, we copied a lot of the existing forms and styles including the feedback forms. At that time, there were two: one for desktop and one for mobile. In order to avoid a translation round, we kept all the original strings of the two forms. The word "Firefox" was hardcoded in the strings, but that was fine since at the time Input only collected feedback for Firefox.

In 2013, in order to reduce complexity on the site because there's only one developer (me), I merged the desktop and mobile forms into one form. In order to avoid a translation round, I continued to keep the original strings. The wording became awkward and the flow through the form wasn't very smooth. Further, the form wasn't responsive at all, so it worked ok on desktop machines but poorly at other viewport sizes.

2014 rolled around and it was clear Input was going to need to branch out into capturing feedback for multiple products---some of which were not Firefox. The form made this difficult.

Related, the smoketest framework I wrote in 2014 struggled with testing the form accurately. I spent some time tweaking it, but a simpler form would make smoketesting a lot easier and less flakey.

Thus over the course of 3 years, we had accumulated the following problems:

  1. The flow through the form felt awkward, instructions weren't clear and information about what data would be public and what data would be private wasn't clear.
  2. Strings had "Firefox" hardcoded and wouldn't support multiple products.
  3. The form wasn't responsive and looked/behaved poorly in a variety of situations.
  4. The form never worked in right-to-left languages and possibly had other accessibility issues.
  5. The architecture didn't let us experiment with the form---tweaking the wording, switching to a more granular gradient of sentiment, capturing other data, etc.

Further, we were seeing many instances of people putting contact information in the description field and there was a significant amount of dropoff.

I had accrued the following theories:

  1. Since the email address is on the third card, users would put their email address in the description field because they didn't know they could leave their contact information later.
  2. Having two cards instead of three would reduce the amount of drop-off and unfinished forms.
  3. Having simpler instruction text would reduce the amount of drop-off.

Anyhow, it was due for an overhaul.

So what's changed?

I've been working on the overhaul for most of 2014, but did the bulk of the work in October and November. It has the following changes:

  1. The new form is shorter and clearer text-wise and design-wise.
  2. It consists of two cards: one for capturing sentiment and one for capturing details about that sentiment.
  3. It clearly delineates data that will be public from data that will be kept private.
  4. It works with LTR and RTL languages (If that's not true, please open a bug.)
  5. It fixes some accessibility issues. (If you find any, please open a bug.)
  6. It uses responsive design, mobile first. Thus it was designed for mobile devices and then scaled to desktop-sized viewports.
  7. It's smaller in kb size and requires fewer HTTP requests.
  8. It's got a better architecture for future development.
  9. It doesn't have "Firefox" hardcoded anymore.
  10. It's simpler so the smoketests work reliably now.
The old Input feedback form.

The new Input feedback form.

Note: Showing before and after isn't particularly exciting since this is only the first card of the form in both cases.

Going forward

The old and new forms were instrumented in various ways, so we'll be able to analyze differences between the two. Particularly, we'll be able to see if the new form performs worse.

Further, I'll be checking the data to see if my theories hold true, especially the one regarding why people put contact data in the description.

There are a few changes in the queue that we want to make over the course of the next 6 months. Now that the new form has landed, we can start working on those.

Even if there are problems with the new form, we're in a much better position to fix them than we were before. Progress has been made!

Take a moment---try out the form and tell us YOUR feedback

Have you ever submitted feedback? Have you ever told Mozilla what you like and don't like about Firefox?

Take a moment and fill out the feedback form and tell us how you feel about Firefox.

Thanks, etc

I've been doing web development since 1997 or so. I did a lot of frontend work back then, but I haven't done anything serious frontend-wise in the last 5 years. Thus this was a big project for me.

I had a lot of help: Ricky, Mike and Rehan from the SUMO Engineering team were invaluable reviewing code, helping me fix issues and giving me a huge corpus of examples to learn from; Matt, Gregg, Tyler, Ilana, Robert and Cheng from the User Advocacy team who spent a lot of time smoothing out the rough edges of the new form so it captures the data we need; Schalk who wrote the product picker which I later tweaked; Matej who spent time proof-reading the strings to make sure they were consistent and felt good; the QA team which wrote the code that I copied and absorbed into the current Input smoketests; and the people who translated the user interface strings (and found a bunch of issues) making it possible for people to see this form in their language.

Brian R. BondySQL on Khan Academy enabled by SQLite, sqljs, asm.js and Emscripten

Originally the computer programming section at Khan Academy only focused on learning JavaScript by using ProcessingJS. This still remains our biggest environment and still has lots of plans for further growth, but we recently generalized and abstracted the whole framework to allow for new environments.

The first environment we added was HTML/CSS which was announced here. You can try it out here. We also have a lot of content for learning how to make webpages already created.

SQL on Khan Academy

We recently also experimented with the ability to teach SQL on Khan Academy. This wasn't a near term priority for us, so we used our hack week as an opportunity to bring an SQL environment to Khan Academy.

You can try out the SQL environment here.

Implementation

To implement the environment, one might first think of WebSQL, but a couple of major browser vendors (Mozilla and Microsoft) do not plan to implement it, and the W3C stopped working on the specification at the end of 2010.

Our implementation of SQL is based on SQLite, which is compiled down to asm.js by Emscripten and packaged as sqljs.

All of these technologies I just mentioned, other than SQLite (which is sponsored by Mozilla), are Mozilla-based projects, thanks in large part to Alon Zakai.

The environment

The environment looks like this: the entire code for creating, inserting, updating, and querying a database lives in a single editor. Behind the scenes, we re-create the entire state of the database and the result sets on each code edit. Things run smoothly enough in the browser that you don't notice it.

Unlike many online SQL tutorials, this environment is entirely client side. It has no limitations on what you can do, and if we wanted, we could even let you export the SQL databases you create.

One of the other main highlights is that you can modify the inserts in the editor, and see the results in real time without having to run the code. This can lead to some cool insights on how changing data affects aggregate queries.
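
To give a rough feel for what "re-create everything on each edit" can look like, here is an illustrative sketch using the sqljs API as I understand it; this is not Khan Academy's actual code, and editorContents is a made-up variable standing for whatever SQL is currently in the editor:

// Rebuild the whole in-memory database from scratch on every change,
// then hand the result sets to the UI for rendering.
var db = new SQL.Database();
var results = db.exec(editorContents); // runs all CREATE/INSERT/SELECT statements
// `results` is an array of {columns, values} objects, one per statement
// that produced output, which the editor can render as tables.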

Hour of Code

Unlike the HTML/CSS work, we don’t have a huge number of tutorials created, but we do have some videos, coding talk throughs, challenges and a project setup in a single tutorial which we’ll be using for one of our hour of code offerings: Hour of Databases.

Doug Belshaw[DRAFT] Toward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online

The title of this post is also the title of a presentation I’m giving at the Literacy Research Association conference next week. The conference has the theme ‘The Dialogic Construction of Literacies’ – so this session is a great fit. It’s been organised by Ian O'Byrne and Greg McVerry, both researchers and Mozilla contributors.

Tiger

I’m cutting short my participation in the Mozilla work week in Portland, Oregon next week to fly to present at this conference. This is not only because I think it’s important to honour prior commitments, but because I want to encourage more literacy researchers to get involved in developing the Web Literacy Map.

I’ve drafted the talk in the style in which I’d deliver it. The idea isn’t to read it, but to use this to ensure that my presentation is backed up by slides, rather than vice-versa. I’ll then craft speaker notes to ensure I approximate what’s written here.

Click here to read the text of the draft presentation

I’d very much appreciate your feedback. Here’s the specific things I’m looking for answers to:

  • Gaps - what have I missed?
  • Structure - does it 'flow’?
  • Red flags - is there anything in there liable to cause problems/issues?

I’ve created a thread on the #TeachTheWeb discussion forum for your responses - or you can email me directly: doug@mozillafoundation.org

Thanks in advance! And remember, it doesn’t matter how new you are to the Web Literacy Map or the process of creating it. I’m interested in the views of newbies and veterans alike.

\(@ ̄∇ ̄@)/

Andrea MarchesiniSwitchy 0.9 released

Breaking news: I finally had time to update Switchy to the latest addon-sdk 1.7 and now version 0.9.x is restart-less!

What is Switchy? Switchy is an add-on for Firefox to better manage several profiles. This add-on allows the user to create Firefox profiles, rename, delete and open them just with a click.

By using Switchy, you can open multiple profiles at the same time: an important feature for those who are concerned about security and privacy. For instance, you can have a separate profile for Facebook and other social networks while browsing other websites, or have a separate profile for Google so you are not always logged in.

Don’t we have some similar add-ons already? There are other similar add-ons, but Switchy has extra features. You can assign websites to be exclusive to particular profiles. This means that when, from profile X, I try to open one of the websites saved in a specific profile, Switchy allows me to “switch” to the correct profile with just one click. For example, if I open ‘Facebook’ from my default profile, Switchy immediately offers me the opportunity to open the correct profile where I am logged in on Facebook - which is nice!

What is new in version 0.9? It is restart-less, and it has an awesome new UI for the Switchy panel.

I hope you enjoy it!

François MarierHiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name, that is the URL you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:

#!/bin/bash
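# Start znc on the server if it is not already running.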
ssh irc.example.com "pgrep znc || znc"
irssi
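# Once irssi exits, offer to shut the bouncer down.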
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

Gervase MarkhamNot That Secret, Actually…

(Try searching Google Maps for “Secret Location”… there’s one in Norway, one in Toronto, and two in Vancouver!)

Karl DubostFix Your Flexbox Web site

Web compatibility issues take many forms. Some are really hard to solve and there are sound business reasons behind them. On the other hand, some Web compatibility issues are really easy to fix, with the benefit of opening the Web site up to a bigger potential market share. CSS Flexbox is one of those. I have written about it in the past. Let's make another practical demonstration of how to fix some of the flexbox issues.

8 Lines of CSS Code

Spoiler alert: This is the final result before and after fixing the CSS.

Screenshots of Hao123 site

How did we do it? Someone had reported that the layout was broken on hao123.com on Firefox OS (Mobile). Two things are happening here. First of all, because Hao123 was not sending the mobile version to Firefox OS, we relied on User Agent overriding. Faking the Firefox Android user agent, we had access to the mobile version. Unfortunately, this version is partly tailored for -webkit- CSS properties.

Inspecting the stylesheets with the developer tools, we can easily discover the culprit.

 grep -i "display:-webkit" hao123-old.css
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
    display:-webkit-box;
    display:-webkit-box
    display:-webkit-box;
 grep -i "flex" hao123-old.css
    -webkit-box-flex:1;

So I decided to fix it by adding the standard equivalents (sketched below):

  1. display:flex; for each display:-webkit-box;
  2. flex-grow: 1; for -webkit-box-flex:1;
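
In CSS terms the substitution looks roughly like this; the selectors are invented for illustration, the real rules live in hao123's stylesheet:

.container {
    display: -webkit-box;  /* what the site shipped */
    display: flex;         /* standard equivalent, added right after it */
}
.child {
    -webkit-box-flex: 1;   /* what the site shipped */
    flex-grow: 1;          /* standard equivalent, added right after it */
}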

The amazing thing with this kind of fix is that the site dynamically fixes itself in your viewport as you go. You are literally sculpting the page. And if the company asks why they should bother? Because for something that will take around 10 minutes to fix, they will suddenly have much bigger coverage of devices… which means users… which means market share.

Guides For Fixing Web Compatibility Issues

I started a repository to help people fix their own Web sites for the most common issues in Web Compatibility. Your contribution to the project is more than welcome.

Otsukare.

Hannah KaneHow to Mofo

OpenMatt and I have been talking about the various ways of working at Mofo, and we compiled this list of what we think works best. What do y’all Mofos think?

When starting a new project:

  • Clearly state the problem or goal. Don’t jump ahead to the solution. Ameliorating the problem is what you’ll measure success against, not your ability to implement an arbitrary solution.
  • Explicitly state assumptions. And, whenever possible, test those assumptions before you build anything. You may have assumptions about the nature of the problem you’re trying to solve, who’s experiencing it, or your proposed solution.
  • Have clear success metrics. How will you know if you’re winning? Do you have the instruments you need to measure success?
  • Determine what resources you need. Think about design, development, content, engagement, evaluation, and ongoing maintenance. We’re working on improving the ways we allocate resources throughout the organization, but to start, be clear about what resources your project will need.
  • Produce a project brief. Detail all of the above in a single document. (Example templates here and here.) Use the project brief when you…
  • …Have a project kick-off meeting. Invite *all* the stakeholders to get involved early.

Communication:

  • Have a check-in plan. Will you have daily check-ins? Weekly email updates? How are you checking in and holding each other accountable?
  • Build a workbench and keep it updated.  We recommend a wiki page that will serve as a one-stop shop for anyone needing information about the project. Things to include: links to project briefs and notes, logistics for meetings, a timeline, a list of who’s involved, and, of course, bugs! Examples here, here, and here.
  • Put your notes in one spot.  A single canonical pad for notes and agendas. We don’t need to create a  new pad every time you have a meeting or a thought! That makes them very hard to track and find later. Examples here and here.

Doing the do:

  • Plan in two-week heartbeats. This helps us stay on track and makes it clear what the priorities are. Speaking of priorities…
  • Learn the Fine Art of Prioritizing. Hint: Not everything can be P1. The product owner or project manager should rank tasks in order of value added. Remember: prioritization is part of managing workflow. It may be true that all or most of the tasks are required for a successful launch, but that doesn’t help a developer or designer who’s trying to decide what to work on next.
  • Work with your friendly neighborhood Tactical Priorities Syndicate. The name sounds scary, but they’re here to serve you. They meet weekly to get your priorities into each two-week heartbeat process. https://wiki.mozilla.org/Webmaker/TPS

Update: To hack on the next version of this, please visit http://workopen.org/mofo (thanks to Doug for the suggestion!)


Roberto A. VitilloClustering Firefox hangs

Jim Chen recently implemented a system to collect stacktraces of threads running for more than 500ms. A summary of the aggregated data is displayed in a nice dashboard in which the top N aggregated stacks are shown according to different filters.

I have looked at a different way to group the frames that would help us identify the culprits of main-thread hangs, aka jank. The problem with aggregating stackframes and looking at the top N is that there is a very long tail of stacks that are not considered. It might very well be that by ignoring the tail we are missing out on some important patterns.

So I tried different clustering techniques until I settled on the very simple solution of aggregating the traces by their last frame. Why the last frame? When I used k-means to cluster the traces I noticed that, for many of the more interesting clusters the algorithm found, most stacks had the last frame in common, e.g.:

  • Startup::XRE_Main, (chrome script), Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, EventDispatcher::Dispatch, (content script), PresShell::Flush, PresShell::DoReflow

Aggregating by the last frame yields clusters that are big enough to be considered interesting in terms of number of stacktraces and are likely to explain the most common issues our users experience.
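
In code, the grouping really is as simple as it sounds. Here is an illustrative Python sketch, not the actual analysis code, where each stack is assumed to be a list of frame names ordered from first call to last:

from collections import Counter

def cluster_by_last_frame(stacks):
    # Count how many hang stacks end in each frame; the biggest buckets
    # are the clusters discussed below.
    return Counter(stack[-1] for stack in stacks if stack)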

Currently on Aurora, the top 10 meaningful offending main-thread frames are in order of importance:

  1. PresShell::DoReflow accounts for 5% of all stacks
  2. nsCycleCollector::collectSlice accounts for 4.5% of all stacks
  3. nsJSContext::GarbageCollectNow accounts for 3% of all stacks
  4. IPDL::PPluginInstance::SendPBrowserStreamConstructor accounts for 3% of all stacks
  5. (chrome script) accounts for 3% of all stacks
  6. filterStorage.js (Adblock Plus?) accounts for 2.7% of all stacks
  7. nsStyleSet::FileRules accounts for 2.7% of all stacks
  8. IPDL::PPluginInstance::SendNPP_Destroy accounts for 2% of all stacks
  9. IPDL::PPluginScriptableObject::SendHasProperty accounts for 2% of all stacks
  10. IPDL::PPluginScriptableObject::SendInvoke accounts for 1.7% of all stacks

Even without showing sample stacks for each cluster, there is some useful information here. The elephants in the room are clearly plugins; or should I say Flash? But just how much do “plugins” hurt our responsiveness? In total, plugin related traces account for about 15% of all hangs. It also seems that the median duration of a plugin hang is not different from a non-plugin one, i.e. between 1 and 2 seconds.

But just how often does a hang occur during a session? Let’s have a look:

Figure: distribution of the number of hangs per session

The median number of hangs for a session amounts to 5; the mean is not that interesting as there are big outliers that skew the data. Also noteworthy is that the median duration of a session is about 16 minutes.

As one would expect, the number of hangs tends to increase as the duration of a session does:

Figure: number of hangs vs. session uptime

The analysis was run on a week’s worth of data for Aurora (over 50M stackframes) and similar results were obtained in previous weeks.

There is some work in progress to improve the status quo. Aaron Klotz’s formidable async plugin initialization is going to eliminate trace 4 and he might tackle frame 8 in the future. Furthermore, a recent improvement in cycle collection is hopefully going to reduce the impact of frame 2.


Mozilla FundraisingOfficial Mozilla Gear Is Now Open for Business

Today is the day: The new Official Mozilla Gear website is open for business: https://gear.mozilla.org/ Official Mozilla Gear is the public site where anyone can buy branded gear for their own personal use or to give to loved ones.  Consider …

Mozilla ThunderbirdThunderbird Reorganizes at 2014 Toronto Summit

In October 2014, 22 active contributors to Thunderbird gathered at the Mozilla office in Toronto to discuss the status of Thunderbird, and plan for the future.

Toronto Contributors at 2014 Toronto Summit

Thunderbird contributors gather in Toronto to plan the future.

As background, Mitchell Baker, Chair of the Mozilla Foundation, posted in July 2012 that Mozilla would significantly reduce paid staff dedicated to Thunderbird, and asked community volunteers to move Thunderbird forward. Mozilla at that time committed several paid staff to maintain Thunderbird, each working part-time on Thunderbird but with a main commitment to other Mozilla projects. The staff commitment in total was approximately one full-time equivalent.

Over the last two years, those individuals had slowly reduced their commitment to Thunderbird, yet the formal leadership of Thunderbird remained with these staff. By 2014 Thunderbird had reached the point where nobody was effectively in charge, and it was difficult to make important decisions. By gathering the key active contributors in one place, we were able to make real decisions, plan our future governance, and move to complete the transition from being staff-led to community-led.

At the Summit, we made a number of key decisions:

  • A group of seven individuals were elected to comprise a Thunderbird Council with the authority to make decisions affecting Thunderbird. I (Kent James) am currently the Chair of this council.
  • For our next major release, Thunderbird 38 due in May 2015, we set this roadmap:
    • Folders: allow >4GByte mbox folders, plus finish support for maildir
    • Instant Messaging: Support WebRTC
    • Calendaring: Merge Lightning into Thunderbird as a shipped addon
    • Accounts: Merge the New Account Types binary addon into core, allowing new account types to be defined using addons in the future.
    • IMAP: support OAUTH authorization in GMail.
  • We agreed that Thunderbird needs to have one or more full-time, paid staff to support shipping a stable, reliable product, and allow progress to be made on frequently-requested features. To this end, we plan to appeal directly to our users for donations.
  • The Thunderbird active contributors are proud to be part of Mozilla, expect to remain part of Mozilla for the foreseeable future, and believe we have an important role to play in fulfilling the goals of the Mozilla Manifesto.

There is a lot of new energy in Thunderbird since the Summit, a number of people are stepping forward to take on some critical roles, and we are looking forward to a great next release. More help is always welcome though!

Brian R. BondyAutomated end to end testing at Khan Academy using Gecko

Developers at Khan Academy are responsible for shipping new stuff they create to khanacademy.org as it's ready. As a whole, the site is deployed several times per day. Testing deploys of khanacademy.org can take up a lot of time.

We have tons of JavaScript and Python unit tests, but they do not catch various errors that can only happen on the live site, such as Content Security Policy (CSP) errors.

We recently deployed a new testing environment for end to end testing which will result in safer deploys. End to end testing is not meant to replace manual testing at deploy time completely, but over time, it will reduce the amount of time taken for manual testing.


Which types of errors do the tests catch?

The end to end tests catch things like missing resources on pages, JavaScript errors, and CSP errors. They do not replace unit tests, and unit tests should be favoured when it's possible.


Which frameworks are we using?


We chose to implement the end to end testing with CasperJS powered by the SlimerJS engine. Actually we even have one more abstraction on top of that so that tests are very simple and clean to write.

SlimerJS is similar and mostly compatible with the more known PhantomJS, but SlimerJS is based on Firefox's Gecko rendering engine instead of WebKit. At the time of this writing, it's based on Gecko 33. CasperJS is a set of higher level APIs and can be configured to use PhantomJS or SlimerJS.

The current version of PhantomJS is based on Webkit and is too far behind to be useful to end to end tests for our site yet. There's a newer version of PhantomJS coming, but it's not ready yet. We also considered using Selenium to automate browsers to do the testing, but it didn't meet our objectives for various reasons.


What do the tests do?

They test the actual live site. They can load a list of pages, run scripts on the pages, and detect errors. The scripts emulate a user of the site who fills out forms, logs in, clicks things, waits for things, etc.

We also have scripts for creating and saving programs in our CS learning environment, doing challenges, and we'll even have some for playing videos.


Example script

Here's an example end-to-end test script that logs in and tests a couple of pages. It will return an error if there are any JavaScript errors, CSP errors, network errors, or missing resources:

EndToEnd.test("Basic logged in page load tests", function(casper, test) {
    Auth.thenLogin(casper);
    [
        [ "Home page", "/"],
        [ "Mission dashboard", "/mission/cc-sixth-grade-math"]
    ].map(function(testPage) {
        thenEcho(casper, "Loading page: " + testPage[0]);
        KAPageNav.thenOpen(casper, testPage[1]);
    });
    Auth.thenLogout(casper);
});

When are tests run?

Developers are currently prompted to run the tests when they do a deploy, but we'll be moving this to run automatically from Jenkins during the deploy process. Tests are run both on the staged website version before it is set as the default, and after it is set as the default version.

The output of tests looks like this:

Henrik SkupinFirefox Automation report – week 39/40 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 39 and 40.

Highlights

One of our goals for last quarter was to get locale testing enabled in Mozmill-CI for each and every supported locale of Firefox beta and release builds. So Cosmin investigated the timing and other possible side-effects which could happen when you test about 90 locales across all platforms! The biggest change we had to make was to the retention policy for logs from executed builds, due to disk space issues. We now delete the logs not only after a maximum number of builds, but also after 3 full days. That gives us enough time for investigation of test failures. Once that was done we were able to enable the remaining 60 locales. For details of all the changes necessary, you can have a look at the mozmill-ci pushlog.

During those two weeks Henrik spent his time on finalizing the Mozmill update tests to support the new signed builds on OS X. Once that was done he also released the new mozmill-automation 2.0.8.1 package.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 39 and week 40.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 39 and week 40.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1100942] Attachment links in request.cgi should go to the attachment and not default to &action=edit
  • [1101659] Remove curtisk from the auto-cc of the sec portion of the moz project review
  • [1102420] Remove “Firefox Screen Sharing Whitelist Submission” link from new-bug page
  • [1103069] Please fix the colo-trip field for Infrastructure and Operations :: DCops
  • [1102229] custom css stylesheets are not loaded if CONCATENATE_ASSETS is false
  • [1103837] Clicking on a “Bug Bounty” attachment should edit that attachment with the bug-bounty form

discuss these changes on mozilla.tools.bmo.



Nicholas NethercoteTwo suggestions for the Portland work week

Mozilla is having a company-wide work week in Portland next week. It’s extremely rare to have this many Mozilla employees in the same place at the same time, and I have two suggestions.

  • Write down a list of people that you want to meet. This will probably contain people you’ve interacted with online but not in person. And send an email to everybody on that list saying “I would like to meet you in person in Portland next week”. I’ve done this at previous work weeks and it has worked well. (And I did it again earlier today.)
  • During the week, don’t do much work you could do at home. This includes most solo coding tasks. If you’re tempted to do such work, stand up and try to find someone to talk to (or listen to) who you couldn’t normally talk to easily. (This is a rule of thumb; if a zero-day security exploit is discovered in your code on Tuesday morning, yes, you should fix it.) Failing that, gee, you might as well do something that you can only do in Portland.

That’s it. Have a great week!

David BoswellRadical Participation Idea: Slow Down

The Portland Coincidental Work Week is next week and we’ll be working on our plans for 2015. One of the things we want to include in our planning is Mitchell’s question about what does radical participation look like for Mozilla today?

Everyone who is interested in this question is welcome to join us next Thursday and Friday for the Participation work week. Please come with ideas you have about this question. Here is one idea I’m thinking about that feels like an important part of a radical participation plan.

Slow Down

I’ve worked at small software start-ups and I’ve worked at large volunteer-based organizations. There are many differences between the two. The speed that information reaches everyone is a major difference.

For example, I worked at a small start-up called Alphanumerica. There were a dozen of us all working together in the same small space. Here’s a picture of me in my corner (to give you an idea of how old this photo is, it was taken on a digital camera that stored photos on a floppy disk).

MVC-017F

To make sure everyone knew about changes, you could get everyone’s attention and tell them. People could then go back to work and everyone would be on the same page. In this setting, moving fast and breaking things works.

Information doesn’t spread this quickly in a globally distributed group of tens of thousands of staff and volunteers. In this setting, if things are moving too fast then no one is on the same page and coordinating becomes very difficult.

communities_map

Mozilla is not a small start-up where everyone is physically together in the same space. We need to move fast though, so how can we iterate and respond quickly and keep everyone on the same page?

Slow Down To Go Fast Later

It might seem odd, but there is truth to the idea that you can slow down now in order to go faster later. There is even research that backs this up. There’s a Harvard Business Review article on this topic worth reading—this paragraph covers the main take-aways:

In our study, higher-performing companies with strategic speed made alignment a priority. They became more open to ideas and discussion. They encouraged innovative thinking. And they allowed time to reflect and learn. By contrast, performance suffered at firms that moved fast all the time, focused too much on maximizing efficiency, stuck to tested methods, didn’t foster employee collaboration, and weren’t overly concerned about alignment

For Mozilla, would radical participation look like setting goals around alignment and open discussions? Would it be radical to look at other large volunteer-based organizations and see what they optimize for instead of using start-ups as a model?

I’m very interested to hear what people think about the value of slowing down at Mozilla as well as hearing other ideas about what radical participation looks like. Feel free to comment here, post your own blog and join us in Portland.


Armen ZambranoPinning mozharness from in-tree (aka mozharness.json)

Since mozharness came around 2-3 years ago, we have had the same issue where we test a mozharness change against the trunk trees, land it and get it backed out because we regress one of the older release branches.

This is due to the nature of the mozharness setup where once a change is landed all jobs start running the same code and it does not matter on which branch that job is running.

I have recently landed some code that is now active on Ash (and soon on Try) that will read a manifest file that points your jobs to the right mozharness repository and revision. We call this process "pinning mozharness". In other words, we fix an external factor of our job execution.

This will allow you to point your Try pushes to your own mozharness repository.

In order to pin your jobs to a repository/revision of mozharness you have to change a file called mozharness.json which indicates the following two values:
  • "repo": "https://hg.mozilla.org/build/mozharness",
  • "revision": "production"
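
For illustration, the manifest itself is just a small JSON file carrying those two keys:

{
  "repo": "https://hg.mozilla.org/build/mozharness",
  "revision": "production"
}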


This is a concept similar to the one talos.json introduced, which locks every job to a specific revision of talos. The original version of it landed in 2011.

Even though we have had a similar concept since 2011, that doesn't mean it was as easy to make it happen for mozharness. Let me explain a bit why:

  • For talos, mozharness has been checking out the right revision of talos.
  • In the case of mozharness, we can't make mozharness check itself out.
    • Well, we could but it would be a bigger mess
    • Instead we have made buildbot ScriptFactory be a bit more flexible
Coming up:
  • Enable on Try
  • Free up Ash and Cypress
    • They have been used to test custom mozharness patches and the default branch of Mozharness (pre-production)
Long term:
  • Enable the feature on all remaining Gecko trees
    • We would like to see this run at scale for a bit before rolling it out
    • This will allow mozharness changes to ride the trains
If you are curious, the patches are in bug 791924.

Thanks to Rail for all his patch reviews and to Jordan for sparking me to tackle it.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tristan NitotEn vrac du lundi

Christian HeilmannDiversifight – a talk at the diversity hackathon at Spotify Sweden

Yesterday afternoon I presented at the “Diversify” hackathon in the offices of Spotify in Stockholm, Sweden. The event was aimed at increasing diversity in IT by inviting a group of students that represented a good mix of gender and ethnic background to work together on hacks with music and music data. There was no strict competitive aspect to this hackathon and no prizes or winners – it was all about working together and seeing how a mixed group can achieve better results.

speaking at diversify
the earth needs rebels
Photos by Sofie Lindblom and Ejay Janis

When I was asked to speak at an event about diversity in IT, I was flattered but also confused. Being very white and male I don’t really have a chance to speak from the viewpoint of a group that brings diversity to the mix. But I do have a lot of experience and I looked into the matter quite a bit. Hence I put together a talk that covers a few things I see going wrong, a few ideas and tools we have to make things better by bettering ourselves, and a reminder that the world of web development used to be much more diverse and we lost these opportunities. In essence, the break-neck speed of our market, the hype of a press and event scene that lives on overselling the amazing world of startups, and the work environments we put together all seem to actively discourage diversity. And that is what I wanted the students to consider and fight once they go out and start working in various companies.

Diversity is not something we can install – it is something we need to fight for. And it makes no sense if only those belonging to disadvantaged groups do that.

This talk is pretty raw and unedited, and it is just a screencast. I would love to give a more polished version of it soon.

You can watch the the screencast on YouTube.

The slides are available on Slideshare.

Resources I covered in the talk:

The feedback was amazing, students really liked it and I am happy I managed to inspire a few people to think deeper about a very important topic.

A big thank you to the Spotify Street Team and especially Caroline Arkenson for having me over (and all the hedgehog photos in the emails).

Yunier José Sosa VázquezThe new “Forget” button in Firefox

Protect your privacy with the new Forget button, available only in the latest version of Firefox. In just a few clicks, you can delete your most recent history and personal information (from the last five minutes up to 24 hours) without touching the rest. The Forget button is very useful if you use a public computer and want to clean up your information, or if you land on a dubious website and need to get out of there quickly.

Olvidar

If you can't find this button in the Firefox toolbar, open the Menu, choose Customize and drag the Forget button wherever you want. There you can also configure your browser however you like, removing and adding buttons to the Menu or to the toolbar.

You can get the latest version of Firefox from our Download Zone for Windows, Mac, Linux and Android.

Staś MałolepszyMeet.js talk on Clientside localization in Firefox OS

Firefox OS required a fast and lean localization method that could scale up to 70 languages, cater to the needs of hundreds of developers worldwide all speaking different languages and support a wide spectrum of devices with challenging hardware specs.

At the end of September, I went to Poznań to speak about localization technology in Firefox OS at Meet.js Summit. In my talk I discussed how we had been able to create a localization framework which embraces new Web technologies like Web components and mutation observers, how we'd come up with new developer tools to make localization work easier and what exciting challenges lay ahead of us.

Botond BalloTrip Report: C++ Standards Meeting in Urbana-Champaign, November 2014

Summary / TL;DR

Project / Status
C++14: Finalized and approved, will be published any day now
C++17: Some minor features so far. Many ambitious features are being explored. for (e : range) was taken out.
Networking TS: Sockets library based on Boost.ASIO moving forward
Filesystems TS: On track to be published early 2015
Library Fundamentals TS: Contains optional, any, string_view and more. No major changes since last meeting. Expected 2015.
Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stage, with many features planned.
Array Extensions TS: Continues to be completely stalled. A new proposal was looked at but failed to gain consensus.
Parallelism TS: Progressing well. Expected 2015.
Concurrency TS: Progressing well. Expected 2015. Will have a follow-up, Concurrency TS II.
Transactional Memory TS: Progressing well. Expected 2015.
Concepts (“Lite”) TS: Progressing well. Expected 2015.
Reflection: Looking at two different proposals. Too early to say anything definitive.
Graphics: 2D Graphics TS based on cairo moving forward
Modules: Microsoft and Clang have implementations at various stages of completeness. They are iterating on it and trying to converge on a design.
Coroutines: Proposals for both stackless and stackful variants will be developed, in a single TS.

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee at the University of Illinois at Urbana-Champaign. This was the third and last Committee meeting in 2014; you can find my reports on the previous meetings here (February 2014, Issaquah) and here (June 2014, Rapperswil). These reports, particularly the Rapperswil one, provide useful context for this post.

The focus of this meeting was moving forward with the various Technical Specifications (TS) that are in progress, and looking ahead to C++17.

C++14

C++14 was formally approved as an International Standard in August when it passed its final ballot (the “DIS”, or Draft International Standard, ballot; see my Issaquah report for a description of the procedure for publishing a new language standard).

It will take another few weeks for ISO to publish the approved standard; it’s expected to happen before the end of the year.

C++17

Strategic Vision

With C++14 being approved, the Committee is turning its attention towards what its strategic goals are for the next revision of the language standard, C++17.

As I explained in my Rapperswil report, most major new features are targeted for standardization in two steps: first, as a Technical Specification (TS), an experimental publication vehicle with no backwards-compatibility requirements, to gain implementation and use experience; and then, by incorporation into an International Standard (IS), such as C++17.

Therefore, a significant amount of the content of C++17 is expected to consist of features being published as Technical Specifications in the near future. It’s not immediately clear which TS’s will be ready for inclusion in C++17; it depends on when the TS itself is published, and whether any concerns about it come up as it’s being implemented and used. Hopefully, at least the ones being published over the next year or so, such as Filesystems, Concepts, Parallelism, Library Fundamentals I, and Transactional Memory, are considered for inclusion in C++17.

In addition, there are some major features that do not yet have a Technical Specification in progress which many hope will be in C++17: namely, Modules and Reflection. Due to the size and scope of these features, it is increasingly likely that the committee will deem it safer to standardize these as TS’s first as well, rather than targeting them directly at C++17. In this case, there may not be time for the additional step of gaining experience with the TS and merging it into the IS in time for C++17; however, it’s too early to know with any confidence at this point.

Minor Features

That said, C++17 will certainly contain some language and library features, and some smaller ones have already made it in. I mentioned a few in my Rapperswil report, but some new ones came out of this meeting:

  • Language features
    • The most notable and exciting feature in my books is folding expressions. These give you the ability to expand a parameter pack over a binary operator. For example, if Args is a non-type parameter pack of booleans, then Args &&... is a new expression which is the ‘and’ of all the booleans in the pack. All binary operators support this; for operators that have a logical identity element (e.g. 0 for addition), an empty pack is allowed and evaluates to that identity. (See the short example after this list.)
    • Another notable change was not an addition, but a removal: the terse form of the range-based for loop, for (elem : range) (which would have meant for (auto&& elem : range)), was removed. (Technically, it was never added, because the C++ working draft was locked for additions in Rapperswil while the C++14 DIS ballot was in progress. However, there was consensus in the Evolution and Core Working Groups in Rapperswil to add it, and there was wording ready to be merged to the working draft as soon as the ballot concluded and it was unlocked for C++17 additions. That consensus disappeared when the feature was put up for a vote in front of full committee in Urbana.) The reason for the removal was that in for (elem : range), there is no clear indication that elem is a new variable being declared; if there already is a variable named elem in scope, one can easily get confused and think the existing variable is being used in the loop. Proponents of the feature pointed out that there is precedent for introducing a new name without explicit syntax for declaring it (such as a type) in generalized lambda captures ([name = init](){ ... } declares a new variable named name), but this argument was not found convincing enough to garner consensus for keeping the feature.
    • std::uncaught_exceptions(), a function that allows you to determine accurately whether a destructor is being called due to stack unwinding or not. There is an existing function, std::uncaught_exception() (note the singular) that was intended for the same purpose, but was inaccurate by design in some cases, as explained in the proposal. This is considered a language feature even though it’s exposed as a library function, because implementing this function requires compiler support.
    • Attributes for namespaces and enumerators. This fills a grammatical hole in the language, where most entities could have an attribute attached to them, but namespaces and enumerators couldn’t; now they can.
    • A shorthand syntax for nested namespace definition.
    • u8 character literals.
    • A piece of official terminology, “forwarding references”, was introduced for a particular use of rvalue references. Some educators have previously termed this use “universal references”, but the committee felt the term “forwarding references” was more accurate.
    • Allowing full constant expressions in non-type template arguments. This plugs a small hole in the language where the template arguments for certain categories of non-type template parameters were restricted to be of a certain form without good reason.
  • Library features
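
The folding-expression item above is easiest to appreciate with code. Below is a minimal sketch using the syntax as the feature eventually landed in C++17 (the final form requires the enclosing parentheses), together with the nested-namespace shorthand from the same list; at the time of this meeting these were still proposals, so treat the snippet as illustrative rather than as the exact proposed spelling.

#include <iostream>

// Nested namespace definition shorthand, another feature from the list above.
namespace util::meta {

// Unary fold over &&: an empty pack yields true, the identity of &&.
template <bool... Bs>
constexpr bool all_of_v = (Bs && ...);

// Binary fold over +: 0 is the identity, so a call with no arguments is well-formed.
template <typename... Ts>
constexpr auto sum(Ts... values) {
    return (values + ... + 0);
}

} // namespace util::meta

int main() {
    static_assert(util::meta::all_of_v<true, true, true>, "all true");
    static_assert(!util::meta::all_of_v<true, false>, "one false");
    std::cout << util::meta::sum(1, 2, 3, 4) << '\n'; // prints 10
}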

Evolution Working Group

As usual, I spent most of my time in the Evolution Working Group (EWG), which concerns itself with the long-term evolution of the core language. In spite of there being a record number of proposals addressed to EWG in the pre-Urbana mailing, EWG managed to get through all of them.

Incoming proposals were categorized into three rough categories:

  • Accepted. The proposal is approved without design changes. Such proposals are sent on to the Core Working Group (CWG), which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals (note: I’m not including here the ones which also passed CWG the same meeting and were voted into the standard – see above for those):

  • Source code information capture, a proposal to provide a replacement for the __FILE__, __LINE__, and __FUNCTION__ macros that doesn’t involve the preprocessor. I think this proposal constitutes a major advance because it removes one of the main remaining uses of the preprocessor.
  • Alias-set attributes, a mechanism to pass information to the optimizer about pointer aliasing (like restrict in C, but better). Some design feedback was given, but generally the proposal was considered baked enough that the next revision can go directly to CWG.
  • A few small design changes to the Transactional Memory TS.
  • A proposal to specify that the behaviour of standard library comparison function objects for comparing pointers is consistent with the behaviour of the built-in comparison operators, where the latter behaviour is defined. This was a matter of tweaking the specification to say something that people took for granted to begin with.
  • A modification to the Concepts Lite TS: removing constexpr constraints, which were one of the kinds of constraints allowed in requires-expressions. The reason for the removal is that they are tricky to specify and implement, and have no major motivating uses.
  • A compile-time string class, templated only on the string length, which stores its data in a constexpr character array. This was one of two competing compile-time string proposals, the other one being a variadic char... template class which encodes the string contents in the template arguments themselves. The two proposals present a tradeoff between expressiveness and compile-time efficiency: on the one hand, encoding the string contents in the template arguments allows processing the string via template metaprogramming, while in the other proposal the string can only be processed with constexpr functions; on the other hand, the variadic approach involves creating lots of template instantiations for string processing, which can slow down compile times significantly. EWG’s view was that the compile-time efficiency consideration was the more important one, especially as constexpr functions are getting more and more powerful. Therefore, the constexpr array-based proposal was selected to move forward. As the proposal has both core language and library components, it will be going to LEWG for design review of the library components before being sent to CWG and LWG. (A rough sketch of the selected constexpr-array flavour appears just below this list.)
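
As a rough illustration of the constexpr-array flavour that was selected, here is a minimal sketch; the class name, helper function, and interface are my own invention for this example and do not reflect the proposal’s actual specification.

#include <cstddef>

// Hypothetical compile-time string: templated only on its length, with the
// characters stored in a constexpr array inside the object.
template <std::size_t N>
class fixed_string {
public:
    constexpr fixed_string(const char (&str)[N + 1]) {
        for (std::size_t i = 0; i <= N; ++i)
            data_[i] = str[i];  // copy the characters and the terminating null
    }
    constexpr std::size_t size() const { return N; }
    constexpr char operator[](std::size_t i) const { return data_[i]; }
private:
    char data_[N + 1] = {};
};

// Deduce N from a string literal (the literal's array size includes the null).
template <std::size_t M>
constexpr fixed_string<M - 1> make_fixed_string(const char (&str)[M]) {
    return fixed_string<M - 1>(str);
}

// Processing happens through ordinary constexpr functions rather than template
// metaprogramming, which keeps the number of instantiations small.
template <std::size_t N>
constexpr std::size_t count_spaces(const fixed_string<N>& s) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        if (s[i] == ' ') ++n;
    return n;
}

int main() {
    constexpr auto hello = make_fixed_string("hello world"); // fixed_string<11>
    static_assert(hello.size() == 11, "length deduced from the literal");
    static_assert(count_spaces(hello) == 1, "evaluated entirely at compile time");
}

The point of the design is visible in count_spaces: the string is examined by an ordinary constexpr function, with only one class template instantiation per distinct length rather than one per distinct string.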

Proposals for which further work is encouraged:

  • Destructive move, which addresses classes for which an operation that moves from an object and destroys the moved-from object at the same time is more efficient than moving and destroying separately, because the intermediate (moved-from but not yet destroyed) state would require extra object state to represent.
  • Default comparisons. Three different proposals on this topic were presented: one which would automatically give all classes comparison operators unless they opted out by =delete-ing them, or defined their own; one which would allow opting in to compiler-defined comparison operators via =default; and one which would synthesize comparison operators using reflection. As suggested by the variety of the proposals, this is a feature that everyone wants but no one can agree exactly how it should work. Design considerations that came up included opt-in vs. opt-out, special handling for certain types of fields (such as mutable fields and pointers), special handling for classes with a single member, compile-time performance, and different strengths of ordering (such as weak vs. total orders). After discussing the proposal for half a day, we ran out of time, and decided to pick up at the next meeting in Lenexa, possibly armed with revised proposals. There was one poll taken which provided fairly clear guidance on a single aspect of the proposal: there was much stronger consensus for opt-in behaviour than for opt-out.
  • A [[noreturn]] attribute for main(), designed for programs that are never meant to finish, such as some software running on embedded systems. This would allow the optimizer to remove code for running some cleanup such as the destructors of global objects. EWG liked the proposal, and sent it to CWG with one change, naming the attribute [[noexit]] instead. CWG, however, pointed out that global destructors are potentially generated by all translation units, not just the one that defines main(), and therefore the proposal is not implementable without link-time optimization. EWG discussed the proposal further, but didn’t reach any consensus, and decided to put it off until Lenexa.
  • A paper concerning violations of the zero-overhead principle in exception handling. The motivation behind this discussion was resource-constrained systems such as embedded systems, where the overhead associated with exception handling was unwelcome. The general feedback given was to try to evaluate and address such overhead in a comprehensive manner, rather than trying to avoid running into it in a few specific cases.
  • Proposals for a unified function call syntax. Two alternative proposals were presented: one for partial unification (calling non-member functions with member function call syntax), and one for complete unification (either kind of function can be called with either syntax); the latter would either involve breaking code, or having separate name lookup rules for the two syntaxes (and thus not fully achieving the intended unification in spirit). People were somewhat in favour of the first proposal, and a lot more cautious about the second. There seemed to be enough interest to encourage further exploration of the idea.
  • A proposal to allow initializer lists with elements of a move-only type. There was consensus that we want some way to do this, but no consensus for this specific approach; it was not immediately clear what a superior approach would be.
  • Overloading the member access operator (operator .), similarly to how operator -> can be overloaded. This would enable writing “smart reference” classes, much like how overloading operator -> enables writing smart pointer classes. This would be a significant new feature, and many design considerations remain to be explored; however, there was general interest in the idea.
  • Mechanisms for selecting from parameter packs. This proposal has two parts. The first part is a simple syntax for indexing into a parameter pack: if Ts is a parameter pack, and N is a compile-time integral constant, Ts.[N] is the parameter at index N in Ts (or a SFINAE-eligible error if the index N is out of range). The dot is necessary for disambiguation (if the syntax were simply Ts[N], then consider Ts[Ns]..., where Ns is a parameter pack of size equal to Ts; is this a pack of array types T_1[N_1], T_2[N_2], ..., or is it T_(N_1), T_(N_2), ...?). While people weren’t ecstatic about this syntax (the dot seemed arbitrary), there weren’t any better suggestions raised, and people preferred to have the feature with this syntax than to not have it at all. The second part of the proposal was less baked, and concerned “subsetting” a parameter pack with a pack of indices to yield a new pack; EWG encouraged further thought about this part, and suggested exploring two aspects separately: pack literals (for example 0 ...< 5 might be hypothetical syntax for a pack literal which expands to 0, 1, 2, 3, 4) and pack transformations, which are operations that take a parameter pack as input, and transform it to another parameter pack.
  • A proposal to fix a counter-intuitive aspect of the definition of “trivially copyable”.
  • Supporting custom diagnostics for SFINAE-eligible errors. This proposal aimed to resolve a long-standing deficiency in template design: you had to pick between making an incorrect use of a template SFINAE-eligible (expressing the constraint via enable_if or similar), or giving a custom diagnostic for it (expressing the constraint via a static_assert). The specific suggestion was to allow annotating a = delete-ed function with a custom error message that would be shown if it were chosen as the best match in overload resolution. EWG felt that this was a problem worth solving, but preferred a more general solution, and encouraged the author to come back with one. (A small illustration of the current tradeoff appears after this list.)
  • A proposal to specify the order of evaluation of subexpressions within an expression for some types of expressions. EWG felt this change would be valuable, as the order of evaluation being currently unspecified is a common cause of surprise and bugs, but the exact rules still need some thought.
  • Another proposal for classes with runtime size. Unfortunately, EWG continues to be pretty much deadlocked on this topic:
    • People want arrays of runtime bound, together with a mechanism for them to be used as class members.
    • There is no consensus for having arrays of runtime bound without such a mechanism.
    • There are hard technical problems that need to be solved to allow classes of runtime size. One of the biggest challenges is that some platforms’ ABIs would have to be rearchitected to accommodate classes with a runtime-size data member in the middle (this includes class hierarchies where one of the subobjects that’s not at the end has a runtime-sized member at the end).
    • No one has yet come up with a comprehensive solution to these technical problems.
    • There is a divide between two ways of looking at these proposals: one is to say that stack allocation is an optimization, and implementations are free to place runtime-sized arrays on the heap in situations where placing them on the stack is too difficult; the other is to want a guarantee that the allocation is on the stack. Proponents of the second view argue that we don’t need a new syntax for the “stack allocation is an optimization” use case; we should instead improve our optimizers so they can optimize uses of std::vector and similar into stack allocations.

    Given this state of affairs, the future of classes with runtime size (and of arrays of runtime bound, which people want to tie to classes with runtime size) continues to be uncertain.

  • Inline variables. After some discussion, it became clear that this was a proposal for two separate features with a single syntax: a way to declare and initialize global constants in a header file without having to define them in a .cpp file (which is something everyone wants); and a way to define “expression aliases”. EWG expressed interest in these problems, and encouraged fleshing out separate proposals for them.
  • Categorically qualified classes. This proposal provides a mechanism to express that a class’s objects are meant to be used only as named objects, not temporaries (useful for “scope guard”-type classes), or that a class’s objects are meant to only be used as temporaries (useful for expression templates). For classes in the latter category, it’s useful to provide a mechanism to convert objects of this type to objects of another type when initializing a named variable; as such, this part of the proposal overlaps with the operator auto proposal that was discussed (and encouraged for further work) in Rapperswil. EWG felt that the two use cases (scope guards and expression templates) weren’t sufficiently similar to necessitate fixing them the same way, and that the design questions raised during the operator auto discussion weren’t adequately addressed in this proposal; encouragement was given to continue exploring the problem space, being open to different approaches for the two use cases.
  • Generalized lifetime extension. This paper outlined a set of rules for determining whether the result of an expression refers to any temporaries that appear as subexpressions, and proposed that when the result of an expression is bound to a named variable of reference type (at local scope), the temporaries referred to by the result have their lifetimes extended to the lifetime of the variable. A very limited form of this exists in C++ today; this proposal would generalize it considerably. I found this proposal to be very interesting; it has the potential to dramatically reduce the number of use-after-free errors that occur due to temporaries being destroyed earlier than we intend them to be. On the other hand, if not done carefully, the proposal would have the potential to cause programmers to be more laissez-faire about their analysis of temporary lifetimes, possibly leading to more errors. For EWG, the sticking point was that performing the refers-to analysis for function call expressions where the function body is in another translation unit requires the co-operation of the function author. The paper proposed annotating parameters with the keyword export to indicate that the function’s return value refers to this parameter. EWG didn’t like this, feeling that these annotations would be “exception specifications all over again”, i.e. components of a function declaration that are not quite part of its type, for which we need ad-hoc rules to determine their behaviour with respect to redeclarations, function pointers, overrides in derived classes, being passed as non-type template arguments, and so on. The conclusion was that the problem this proposal addresses is a problem we want solved, but that this approach was not in the right direction for solving the problem.
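
To make the SFINAE-diagnostics tradeoff mentioned earlier in this list concrete, here is a small C++11 sketch of the two constraint styles a library author currently has to choose between; the function names are made up for the example.

#include <type_traits>
#include <iostream>

// Option 1: constrain via enable_if. Incorrect uses are SFINAE-eligible
// (another overload can still be picked), but when nothing matches the
// resulting error message is notoriously unhelpful.
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
void print_value_sfinae(T value) {
    std::cout << "integral: " << value << '\n';
}

// Option 2: constrain via static_assert. The diagnostic is friendly, but the
// function still participates in overload resolution, so the error is a hard
// error rather than a SFINAE-eligible one.
template <typename T>
void print_value_assert(T value) {
    static_assert(std::is_integral<T>::value,
                  "print_value_assert requires an integral type");
    std::cout << "integral: " << value << '\n';
}

int main() {
    print_value_sfinae(42);
    print_value_assert(7);
    // print_value_sfinae(3.14);  // error: no matching function (cryptic)
    // print_value_assert(3.14);  // error: clear static_assert message
}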

Rejected proposals:

  • A proposal to let return {expr} be explicit, in the sense that it would allow invoking constructors of the function’s return type even if they were explicit. This proposal had support in Rapperswil, but after several new papers argued against it, EWG decided to shelve it.
  • The proposal for named arguments that Ehsan and I wrote. While a few people in the room liked the idea, the majority had reservations about it; prominent among these were the proposal encouraging functions with many parameters, and the additional maintenance burden on library authors caused by parameter name changes breaking code.
  • A proposal for a null-coalescing conditional operator, a ?: b, which would have been equivalent to a ? a : b. EWG felt the utility wasn’t sufficiently compelling to warrant a language change.
  • Checked-dereference conditions. This would have made if (T x : expr) { S } equivalent to if (auto p = expr) { T x = *p; S } (and similarly for while loops and the test-expressions of for loops). EWG felt this shorthand wasn’t sufficiently compelling, and could cause confusion due to the similarity of the syntax to the range-based for loop.
  • A proposal for uniform handling of subobjects. This would allow data members and bases of a class to be interleaved in any order. EWG felt this change was too intrusive and insufficiently motivated.

Contracts

EWG held a special evening session on the topic of contracts, as there was a lot of interest in them at this meeting. Several papers on the topic were presented; a couple of others were not presented, due to lack of time or the absence of a presenter.

The only proposal that was specifically considered was a proposal to turn the assert macro into a compiler-recognized operator with one of a specified set of semantics based on the value of the NDEBUG macro; it was rejected, mostly on the basis that it was infeasible to muck with assert and NDEBUG for backwards-compatibility reasons.

Other than that, the discussion was more about high-level design aspects of contract programming rather than specific proposals. Some issues that came up were:

  • Where to specify the contracts for a function – in the declaration, in the implementation, or potentially either – and what the implications are.
  • Whether optimizers should be allowed to assume that contracts are obeyed, such that non-obeyance (e.g. precondition violation) implies undefined behaviour.
  • Whether the standard should specify different modes of behaviour (e.g. “release” vs. “debug”) with respect to contract checking (and if so, how to control the mode, or if this should be left implementation-defined).
  • What the behaviour should be upon contract violation (“keep going” but undefined behaviour, custom handler, terminate, throw, etc.).

The discussion closed with some polls to query the consensus of the room:

  • There was consensus that we want some form of contracts.
  • There was consensus that ensuring correctness and realizing performance gains are both important goals of a contracts proposal, with correctness being the primary one.
  • There was consensus that we need to support contracts in interfaces / declarations (at least).
  • There was no consensus for having some notion of “build modes” specified in the standard to control aspects of contract checking.

These views will likely guide future proposals on this topic.

Coroutines

Coroutines was another topic with a lot of interest at this meeting. There were three proposals on the table: “resumable functions”, “resumable lambdas”, and a library interface based on Boost.Coroutine. These proposals started out under the purview of SG 1 (Concurrency), but then they started growing into a language feature with applications unrelated to concurrency as well, so the proposals were presented in an evening session to give EWG folks a chance to chime in too.

The coroutines proposals fall into two categories: stackful and stackless, with the “resumable functions” and “resumable lambdas” proposals being variations on a stackless approach, and the Boost.Coroutine proposal being a stackful approach.

The two approaches have an expressiveness/performance tradeoff. Stackful coroutines have more overhead, because a stack needs to be reserved for them; the size of the stack is configurable, but making it too small risks undefined behaviour (via a stack overflow), while making it too large wastes space. Stackless coroutines, on the other hand, use only as much space as they need by allocating space for each function call on the heap (these are called activation frames; in some cases, the heap allocation can be optimized into stack allocation). The price they pay in expressiveness is that any function that calls a resumable function (i.e. a stackless coroutine) must itself be resumable, so the compiler knows to allocate activation frames on the heap when calling it, too. By contrast, with the stackful approach, any old function can call into a stackful coroutine, because execution just switches to using the coroutine’s side stack for the duration of the call.

Within the “stackless” camp, the difference between the “resumable functions” and “resumable lambdas” approaches is relatively small. The main difference is that the “resumable lambdas” approach allows coroutines to be passed around as first-class objects (since lambdas are objects).

The authors of the “resumable functions” and Boost.Coroutine proposals have attempted to come up with a unified proposal that combines the power of “stackful” with the expressiveness of “stackless”, but haven’t succeeded, and in fact have come to believe that the tradeoff is inherent. In light of this, and since both approaches have compelling use cases, the committee was of the view that both approaches should be pursued independently, both targeting a single Coroutines Technical Specification, with the authors co-operating to try to capture any commonalities between their approaches (if nothing else then a common, consistent set of terminology) even if a unified proposal isn’t possible. For the stackless approach, participants were polled for a preference between the “resumable functions” and “resumable lambdas” approaches; there was stronger support for the “resumable functions” approach, though I think this was at least in part due to the “resumable lambdas” approach being newer and less well understood.

I had a chance to speak to Chris Kohlhoff, the author of the “resumable lambdas” proposal, subsequent to this session. He had an idea for combining the “stackless” and “stackful” approaches under a single syntax that I found very interesting, which he plans to prototype. If it pans out, it might end up as the basis of a compelling unified proposal after all.

I’m quite excited about the expressivity coroutines would add to the language, and I await developments on this topic eagerly, particularly on Chris’s unified approach.

Embedded Systems

The topic of forming a Study Group to explore ways to make C++ more suitable for embedded systems came up again. In addition to the two papers presented on the topic, some further ideas in this space were containers that can be stored in ROM (via constexpr), and having exceptions without RTTI. It was pointed out that overhead reductions of this sort might be of interest to other communities, such as gaming, graphics, real-time programming, low-latency programming, and resource-constrained systems. EWG encouraged discussion across communities before forming a Study Group.

Library/Library Evolution Working Groups (LWG and LEWG)

I mentioned the library features that are targeted for C++17 in the “C++17” section above. Here I’ll talk about progress on the Library Fundamentals Technical Specifications, and future work.

Library Fundamentals TS I

The first Library Fundamentals TS has already gone through its first formal ballot, the PDTS (Preliminary Draft Technical Specification) ballot. LWG addressed comments sent in by national standards bodies in response to the ballot; the resulting changes were very minor, the most notable being the removal of the network byte-order conversion functions (htonl() and friends) over concerns that they clash with similarly-named macros. LWG will continue addressing the comments during a teleconference in December, and then they plan to send out the specification for its DTS (Draft Technical Specification) ballot, which, if successful, will be its last before publication.

Library Fundamentals TS II

The second Library Fundamentals TS is in the active development stage. Coming into the meeting, it contained a single proposal, for a generalized callable negator. During this meeting, several new features were added to it:

There will very likely be more features added at the next meeting, in May 2015; the TS is tentatively scheduled to be sent out for its PDTS ballot at the end of that meeting.

Future Work

In addition to the proposals which have already been added into C++17 or one of the TS’s, there are a lot of other library proposals in various stages of consideration.

Proposals approved by LEWG and under review by LWG:

Proposals approved by LEWG for which LWG review is yet to start:

Proposal for which LEWG is encouraging further work:

Proposals rejected by LEWG:

There will be a special library-only meeting in Cologne, Germany in February to allow LWG and LEWG to catch up a bit on all these proposals.

Study Groups

SG 1 (Concurrency)

SG 1’s main projects are the Concurrency TS and the Parallelism TS. As with the Library Fundamentals TS, both are likely to be the start of a series of TS’s (so e.g. the Parallelism TS will be followed by a Parallelism TS II).

Besides coroutines, which I talked about above, I haven’t had a chance to follow SG 1’s work in any amount of detail, but I will mention the high-level status:

The Parallelism TS already had its PDTS ballot; comments were addressed this week, resulting in minor changes, including the addition of a transform-reduce algorithm. SG 1 will continue addressing comments during a teleconference in December, and then plans to send the spec out for its DTS ballot. As mentioned above, there are plans for a Parallelism TS II, but no proposals have been approved for it yet.

The Concurrency TS has not yet been sent out for its PDTS ballot; that is now planned for Lenexa.

Some library proposals that have been approved by LEWG for the Concurrency TS:

Task regions are still being considered by LEWG, and would likely target Concurrency TS II.

A major feature being looked at by SG 1 is executors and schedulers, with two competing proposals. The two approaches were discussed, and SG 1 felt that at this stage there’s still design work to be done and it’s too early to make a choice. This feature is targeting the second Concurrency TS as it’s unlikely to be ready in time for Lenexa, and SG 1 doesn’t want to hold up the first Concurrency TS beyond Lenexa.

Coroutines are also a concurrency feature, but as mentioned above, they are now targeting a separate TS.

SG 2 (Modules)

EWG spent an afternoon discussing modules. At this point, Microsoft and Clang both have modules implementations, at various levels of completion. The Microsoft effort is spearheaded by Gabriel Dos Reis, who summarized the current state of affairs in a presentation.

The goals of modules are:

  • componentization
  • isolation from macros
  • improving build times
  • making it easier to write semantics-aware developer tools
  • being a code distribution mechanism is, at the moment, an explicit non-goal

The aspects of a modules design that people generally agree on at this point are:

  • modules are not a scoping mechanism (i.e. they are independent of namespaces)
  • when performing template instantiation while compiling a module, the compiler has access to the full module being compiled, but only to the interfaces of imported modules
  • the interface of a module can be separated from its implementation
  • module interfaces cannot have cyclic dependencies
  • only one module owns the definition of an entity

Design points that still need further thought are:

  • visibility of private class members across module boundaries
  • ordering of static/dynamic initialization
  • can macros flow into modules? (e.g. NDEBUG)
    • one view on this is that there should be no standard way to provide an input macro to a module, but implementations can provide implementation-defined mechanisms, such as defining NDEBUG on the compiler command line to build a debug version of a module
    • another option is to “parameterize” a module on certain input parameters (such as the value of the NDEBUG macro)
      • this in turn raises the question of a more general parameterization mechanism for modules, akin to templates
  • can macros flow out of modules? (e.g. can the Boost.Preprocessor library be packaged up into a module?)
  • semantics of entities with internal linkage in a module interface
  • can a module interface be spread across several files?
  • the syntax for defining a module
  • how to deal with #includes in a module

EWG was generally pleased with the progress being made, and encouraged implementors to continue collaborating to get their designs to converge, and report back in Lenexa.

The Clang folks also reported promising performance numbers from their implementation, but detailed/comprehensive benchmarks remain to be performed.

SG 3 (Filesystems)

SG 3 did not meet in Urbana. The Filesystems TS is waiting for its DTS ballot to close; assuming it’s successful (which is the general expectation), it will be published early next year.

Proposals targeting a follow-up Filesystems TS II are welcome; none have been received so far.

SG 4 (Networking)

Organizationally, the work of SG 4 has been conducted directly by LEWG over the past few meetings. This arrangement has been formalized at this meeting, with SG 4’s chair, Kyle Kloepper, retiring, and the SG becoming “dormant” until LEWG decides to reactivate it.

In Rapperswil, LEWG had favourably reviewed a proposal for a C++ networking library based on Boost.ASIO, and asked the author (Chris Kohlhoff, whom I’ve talked about earlier in the context of coroutines) to update the proposal to leverage C++14 language features. Chris has done so, and presented an updated proposal to LEWG in Urbana; this update was also received favourably, and was voted to become the initial working draft of the Networking TS, which now joins the roster of Technical Specifications being worked on by the committee. In other words, we’re one step closer to having a standard sockets library!

SG 5 (Transactional Memory)

I haven’t been following the work of SG 5 very closely, but I know the Transactional Memory TS is progressing well. Its working draft has been created based on two papers, and it’s going to be sent out for its PDTS ballot shortly (after a review conducted via teleconference), with the intention being that the ballot closes in time to look at the comments in Lenexa.

SG 6 (Numerics)

Topics of discussion in SG 6 included:

  • a replacement for std::rand which combines the security of the C++11 <random> facilities with the simple interface of std::rand (a brief sketch of what this might look like follows this list)
  • special math functions for C++17
  • typedefs similar to int16_t for floating-point types
  • bignums, ratios, fixed-point arithmetic
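
As a hedged sketch of the std::rand replacement item above, the snippet below contrasts today’s verbose-but-secure <random> idiom with the kind of simple wrapper such a replacement is aiming for. The randint name and signature here are illustrative only and are not necessarily what the actual proposal specifies.

#include <random>
#include <iostream>

// What "secure but simple" could look like: a helper wrapping the C++11
// <random> machinery behind a std::rand-like call. Name and interface are
// assumptions made for this example.
int randint(int lo, int hi) {
    static thread_local std::mt19937 engine{std::random_device{}()};
    return std::uniform_int_distribution<int>{lo, hi}(engine);
}

int main() {
    // Today's idiomatic but verbose way to get a die roll:
    std::mt19937 engine{std::random_device{}()};
    std::uniform_int_distribution<int> die{1, 6};
    std::cout << die(engine) << '\n';

    // The kind of one-liner a std::rand replacement is after:
    std::cout << randint(1, 6) << '\n';
}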

A Numerics TS containing proposals for some of the above may be started in the near future.

There is an existing TR (Technical Report, an older name for a Technical Specification) for decimal floating-point arithmetic. There is a proposal to integrate this into C++17, but there hasn’t been any new progress on that in Urbana.

SG 7 (Reflection)

SG 7 looked at two reflection proposals: an updated version of a proposal for a set of type traits for reflecting the members of classes, unions, and enumerations, and a significantly reworked version of a comprehensive proposal for static reflection.

The reflection type trait proposal was already favourably reviewed in Rapperswil. At this meeting, additional feedback was given on two design points:

  • Access control. There was consensus that reflection over inaccessible members should be allowed, but that it should occur via a separate mechanism that is spelt differently in the code (for example, there might be one namespace called std::reflect which provides traits for reflecting accessible members only, and another called std::reflect_invasively which provides traits for reflecting all members including inaccessible ones). The rationale is that for some use cases, reflecting only over accessible members is appropriate, while for others, reflecting over all members is appropriate, and we want to be able to spot uses of an inappropriate mechanism easily. Some people also expressed a desire to opt-out from invasive reflection on a per-class basis.
  • Syntax. The proposal’s syntax for e.g. accessing the name of the second member of a class C is std::class_member::name<C,1>. A preference was expressed a) for an additional level of grouping of reflection-related traits into a namespace or class reflect, e.g. std::reflect::class_member::name<C,1>, and b) for not delaying the provision of all inputs until the last component of the trait, e.g. std::reflect<C>::class_member<1>::name. (This last form has the disadvantage that it would actually need to be std::reflect<C>::template class_member<1>::name; some suggestions were thrown around for avoiding this by making the syntax use some compiler magic (as the traits can’t be implemented purely as a library anyways)).

It was also reiterated that this proposal has some limitations (notably, member templates cannot be reflected, nor can members of reference or bitfield type), but SG 7 remains confident that the proposal can be extended to fill these gaps in due course (in some cases with accompanying core language changes).

The comprehensive static reflection proposal didn’t have a presenter, so it was only looked at briefly. Here are some key points from the discussion:

  • This proposal allows reflection at a much greater level of detail – often at the level of what syntax was used, rather than just what entities were declared. For example, this proposal allows distinguishing between the use of different typedefs for the same type in the declaration of a class member; the reflection type traits proposal does not.
  • No one has yet explored this area enough to form a strong opinion on whether having access to this level of detail is a good thing.
  • SG 7 is interested in seeing motivating use cases that are served by this proposal but not by the reflection type traits proposal.
  • Reflecting namespaces – a feature included in this proposal – is viewed as an advanced reflection feature that is best left off a first attempt at a reflection spec.
  • The author is encouraged to do further work on this proposal, with the above in mind. Splitting the proposal into small components is likely to help SG 7 make progress on evaluating it.

There is also a third proposal for reflection, “C++ type reflection via variadic template expansion”, which sort of fell off SG 7’s radar because it was in the post-Issaquah mailing and had no presenter in Rapperswil or Urbana; SG 7 didn’t look at it in Urbana, but plans to in Lenexa.

SG 8 (Concepts)

The Core Working Group continued reviewing the Concepts TS (formerly called “Concepts Lite”) in Urbana. The fundamental design has not changed over the course of this review, but many details have. A few changes were run by EWG for approval (I mentioned these in the EWG section above: the removal of constexpr constraints, and the addition of folding expressions). The hope was to be ready to send out the Concepts TS for its PDTS ballot at the end of the meeting, but it didn’t quite make it. Instead, CWG will continue the review via teleconferences, and possibly a face-to-face meeting, for Concepts only, in January. If all goes well, the PDTS ballot might still be sent out in time for the comments to arrive by Lenexa.

SG 9 (Ranges)

As far as SG 9 is concerned, this has been the most exciting meeting yet. Eric Niebler presented a detailed and well fleshed-out proposal for integrating ranges into the standard library.

Eric’s ranges are built on top of iterators, thus fitting on top of today’s iterator-based algorithms almost seamlessly, with one significant change: the begin and end iterators of a range are not required to be of the same type. As the proposal explains, this small change allows a variety of ranges to be represented efficiently that could not be under the existing same-type model, including sentinel- and predicate-based ranges.

The main parts of the proposal are a set of range-related concepts, a set of range algorithms, and a set of range views. The foundational concept is Iterable, which corresponds roughly to what we conversationally call (and also what the Boost.Range library calls) a “range”. An Iterable represents a range of elements delimited by an Iterator at the beginning and a Sentinel at the end. Two important refinements of the Iterable concept are Container, which is an Iterable that owns its elements, and Range, which is a lightweight Iterable that doesn’t own its elements. The range algorithms are basically updated versions of the standard library algorithms that take ranges as Iterables; there are also versions that take (Iterator, Sentinel) pairs, for backwards-compatibility with today’s callers. Finally, the range views are ways of transforming ranges into new ranges; they correspond to what the Boost.Range library calls range adaptors. There is also a suggestion to enhance algorithms with “projections”; I personally see this as unnecessary, since I think range views serve their use cases better.
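
To illustrate the begin/end-of-different-types idea, here is a tiny hand-rolled sketch of a sentinel-delimited range over a null-terminated string; the type names are mine and do not come from Eric’s proposal. The end of the range is detected by a predicate in operator!= rather than by comparing two iterators of the same type, which is what makes sentinel- and predicate-based ranges cheap to represent.

#include <iostream>

// A sentinel marking "stop when the iterator reaches '\0'".
struct null_sentinel {};

struct cstring_iterator {
    const char* p;
    char operator*() const { return *p; }
    cstring_iterator& operator++() { ++p; return *this; }
};

// The "end test" is just a predicate on the iterator's current position.
bool operator!=(cstring_iterator it, null_sentinel) { return *it.p != '\0'; }

struct cstring_range {
    const char* str;
    cstring_iterator begin() const { return {str}; }
    null_sentinel end() const { return {}; }  // a different type than begin()
};

int main() {
    cstring_range hello{"hello"};
    // With the classic same-type model we would first have to find the end;
    // with a sentinel, termination is checked lazily as we iterate.
    for (auto it = hello.begin(); it != hello.end(); ++it)
        std::cout << *it;
    std::cout << '\n';
}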

Eric has fully implemented this proposal, thus convincingly demonstrating its viability.

Importantly, this proposal depends on the Concepts TS to describe the concepts associated with ranges and to define algorithms and views in terms of those concepts. (Eric’s implementation emulates the features of the Concepts TS with a C++11 concepts emulation layer.)

The proposal was overall very well received; there was clear consensus that Eric should pursue the high-level design he presented and come back with a detailed proposed specification.

An important practical point that needed to be addressed is that this proposal is not 100% backwards-compatible with the current STL. This wasn’t viewed as a problem, as previous experience trying to introduce C++0x concepts to the STL while not breaking anything has demonstrated that this wasn’t possible without a lot of contortions, and people have largely accepted that a clean break from the old STL is needed to build a tidy, concepts-enabled “STL 2.0”. Eric’s proposal covers large parts of what such an STL 2.0 would look like, so there is good convergence here. The consensus was that Eric should collaborate with Andrew Sutton (primary author and editor of the Concepts TS) on a proposal for a Technical Specification for a concepts-enabled ranges library; the exact scope (i.e. whether it will be just a ranges library, or a complete STL overhaul) is yet to be determined.

SG 10 (Feature Test)

The Feature Test Standing Document (the not-quite-a-standard document used by the committee to specify feature test macros) has been updated with C++14 features.

The feature test macros are enjoying adoption by multiple implementors, including GCC, Clang, EDG, and others.

SG 12 (Undefined Behaviour)

SG 12 looked at:

SG 13 (I/O, formerly “Graphics”)

SG 13 has been working on a proposal for a 2D Graphics TS based on cairo’s API. In Urbana, an updated version of this proposal, which included some proposed wording, was presented to LEWG. LEWG encouraged the authors to complete the wording, and gave a couple of pieces of design advice:

  • If possible, put in place some reasonable defaults (e.g., a default foreground color that’s in place if you don’t explicitly set a foreground color) so a “Hello world” type program can be written more concisely.
  • Where the API differs from a mechanical transliteration of the cairo API, document the rationale for the difference.

Next Meeting

The next full meeting of the Committee will be in Lenexa, Kansas, the week of May 4th, 2015.

There will also be a library-only meeting in Cologne, Germany the week of February 23rd, and a Concepts-specific meeting in Skillman, New Jersey from January 26-28.

Conclusion

This was probably the most action-packed meeting I’ve been to yet! My personal highlights:

  • The amount of interest in coroutines, and the green-light that was given to develop proposals for both the stackful and stackless versions. I think coroutines have the potential to revolutionize how C++ programmers express control flow in many application domains.
  • Eric Niebler’s proposal for bringing ranges to the standard library. It’s the first cohesive proposal I’ve seen that addresses all the tough practical questions involved in such an endeavour, and it was accordingly well-received.
  • The continuing work on modules, particularly the fact that Microsoft and Clang both have implementations in progress and are cooperating to converge on a final design.

Stay tuned for further developments!


PomaxNew Entry

click the entry to start typing

PomaxRSS description testing

My RSS generator wasn't adding article bodies to the RSS, which caused some problems for certain RSS readers. Let's see if this fixes it.

Patrick McManusProxy Connections over TLS - Firefox 33

There have been a bunch of interesting developments over the past few months in Mozilla Platform Networking that will be news to some folks. I've been remiss in not noting them here. I'll start with the proxying over TLS feature. It landed as part of Firefox 33, which is the current release.

This feature is from bug 378637 and is sometimes known as HTTPS proxying. I find that naming a bit ambiguous - the feature is about connecting to your proxy server over HTTPS, but it supports proxying for both http:// and https:// resources (as well as ftp://, ws://, and wss:// for that matter). https:// transactions are tunneled via end to end TLS through the proxy via the CONNECT method, in addition to the connection to the proxy being made over a separate TLS session. For https:// and wss:// that means you actually have end to end TLS wrapped inside a second TLS connection between the client and the proxy.

There are some obvious and non-obvious advantages here - but proxying over TLS is strictly better than traditional plaintext proxying. One obvious reason is that it provides authentication of your proxy choice - if you have defined a proxy then you're placing an extreme amount of trust in that intermediary. It's nice to know via TLS authentication that you're really talking to the right device.

Also, the communication between you and the proxy is of course kept confidential, which helps your privacy with respect to observers of the link between client and proxy, though this is not end to end if you're not accessing an https:// resource. Proxying over TLS connections also keeps any proxy-specific credentials strictly confidential. There is an advantage even when accessing https:// resources through a proxy tunnel: encrypting the client-to-proxy hop conceals some information (at least for that hop) that https:// normally leaks, such as the hostname through SNI and the server IP address.

Somewhat less obviously, HTTPS proxying is a prerequisite to proxying via SPDY or HTTP/2. These multiplexed protocols are extremely well suited for use in connecting to a proxy because a large fraction (often 100%) of a client's transactions are funneled through the same proxy, and therefore only 1 TCP session is required when using a prioritized multiplexing protocol. When using HTTP/1 a large number of connections are required to avoid head of line blocking, and it is difficult to meaningfully manage them to reflect prioritization. When connecting to remote proxies (i.e. those with a high latency such as those in the cloud) this becomes an even more important advantage as the handshakes that are avoided are especially slow in that environment.

This multiplexing can really warp the old noodle to think about after a while - especially if you have multiple spdy/h2 sessions tunneled inside a spdy/h2 connection to the proxy. That can result in the top level multiplexing several streams with http:// transactions served by the proxy as well as connect streams to multiple origins that each contain their own end to end spdy sessions carrying multiple https:// transactions.

To utilize HTTPS proxying just return the HTTPS proxy type from your FindProxyForURL() PAC function (instead of the traditional HTTP type). This is compatible with Google's Chrome, which has a similar feature.

function FindProxyForURL(url, host) {
  // Send plaintext http:// requests through the TLS-protected proxy.
  if (url.substring(0, 7) == "http://") {
    return "HTTPS proxy.mydomain.net:443;";
  }
  // Everything else (including https://) connects directly in this example.
  return "DIRECT;";
}


Squid supports HTTP/1 HTTPS proxying. Spdy proxying can be done via Ilya's node.js based spdy-proxy. nghttp can be used for building HTTP/2 proxying solutions (H2 is not yet enabled by default on firefox release channels - see about:config network.http.spdy.enabled.http2 and network.http.spdy.enabled.http2draft to enable some version of it early). There are no doubt other proxies with appropriate support too.

If you need to add a TOFU exception for use of your proxy it cannot be done in proxy mode. Disable proxying, connect to the proxy host and port directly from the location bar and add the exception. Then enable proxying and the certificate exception will be honored. Obviously, your authentication guarantee will be better if you use a normal WebPKI validated certificate.

Kevin NgoPushing Hybrid Mobile Apps to the Forefront

Mozilla Festival 2014 was held in London in October.

At Mozilla Festival 2014, I facilitated a session on Pushing Hybrid Mobile Apps to the Forefront. Before, I had been building a poker app to keep track of my poker winning statistics, record notes on opponents, and crunch poker math. I used the web as a platform, but having an iPhone, wanted this app to be on iOS. Thus, the solution was hybrid mobile apps, apps written in HTML5 technologies that are wrapped to run "natively" on all platforms (e.g., iOS, Android, FirefoxOS).

I stumbled upon the Ionic hybrid mobile app framework. This made app development so easy. It fulfills the promise of the web: write once, run everywhere. In my over two years with Mozilla, I've read very little hype for hybrid mobile apps. Hybrid mobile apps have the potential to convert many more native developers over to the web platform, but they aren't getting the ad-time they deserve.

What is a Hybrid Mobile App?

Hybrid mobile apps, well explained in this article from Telerik, are apps written in HTML5 technologies that are enabled to run within a native container. They use the device's browser engine to render the app, and a web-to-native polyfill, most prominently Cordova, can then be injected in order to access device APIs.

The Current Lack of Exposure for Hybrid Mobile Apps

In all of the Mozilla Developer Network (MDN), there are around three articles on hybrid mobile apps, which aren't really fully fleshed out and are in need of technical review. There's been a good amount of work from James Longster in the form of Cordova Firefox OS support. There could be more to be done on the documentation side.

Cross-platform capability on mobile should be flaunted more. In MDN's main article on Open Web Apps, there's a list of advantages of open web apps. This article is important since it is a good entry point into developing web apps. The advantages listed shouldn't really be considered advantages relative to native apps:

  • Local installation and offline storage: to a developer, these should be inherent to an app, not an explicit advantage. Apps are expected to be installable and have offline storage.
  • Hardware access: also should be inherent to an app and not an explicit advantage. Apps are expected to be able to communicate with their device's APIs.
  • Breaking the walled gardens: there are no "walls" being broken if these web apps only run in the browser and FirefoxOS. They should be able to live inside the App Store and Play Store to really have any effect.
  • Open Web App stores: well, that is pretty cool actually. I built a personal app that I didn't want distributed to anyone except me and one other person, so I simply built a page that had the ability to install the app. However, pure web apps alone can't be submitted to the App Store or Play Store, so that should be addressed first.

What's missing here is the biggest advantage of all: being able to run cross-platform (e.g., iOS, Android, FirefoxOS, Windows). That's the promise of the web, and that's what attracts most developers to the web in the first place. Write it once, run anywhere, no need to port between languages or frameworks, and still be able to submit to the App Store/Play Store duopoly to reach the most users. For many developers, the web is an appropriate platform, saving time and maintenance.

Additionally, most developers also prefer the traditional idea of apps, that they are packaged up and uploaded to the storefront, rather than self-hosted on a server. On the Firefox Marketplace, the majority of apps are packaged over hosted (4800 to 4100).

There's plenty of bark touting the cross-platform capability of the web, but there's little bite on how to actually achieve that on mobile. Hybrid mobile apps have huge potential to attract more developers to the web platform. But with their lack of exposure, it's wasted potential.

So what can we do? The presence of hybrid mobile apps on MDN could be buffed. I've talked to Chris Mills of the MDN team at Mozfest, and he mentioned it was a goal for 2015. FirefoxOS Cordova plugins may welcome contributors. And I think the biggest way would be to help add official FirefoxOS support to Ionic, a popular hybrid mobile app framework which currently has over 11k stars. They've mentioned they have FirefoxOS on the roadmap.

Building with Ionic

Ionic Framework is a hybrid mobile app framework. It has a beautifully designed set of native-like icons and CSS components, pretty UI transitions, web components (through Angular directives for now), build tools, and an easy-to-use command-line interface.

With Ionic, I built my poker app I initially mentioned. It installs on my phone, and I can use it at the tables:

Poker app

Poker app built with Ionic.

For the Mozfest session, I generated a sample app with Ionic (that simply makes use of the camera), and put it on Github with instructions. To get started with a hybrid mobile app:

  • npm install -g ionic cordova
  • ionic start myApp tabs - creates a template app
  • cordova plugin add org.apache.cordova.camera - installs the Cordova camera plugin (there are many to choose from)
  • ionic platform add <PLATFORM> - where <PLATFORM> could be ios, android, or firefoxos. This enables the platform
  • ionic platform build <PLATFORM> - builds the project

To emulate it for iOS or Android:

  • ionic emulate <PLATFORM> - will open the app in XCode for ios or adbtools for android

To simulate it for FirefoxOS, open the project with WebIDE inside platforms/firefoxos/www.

How the Mozfest Session Went

It was difficult to plan since Mozfest is more of an unconference, where everything is meant to be hands-on and accessible. Mozfest wasn't a deeply technical conference, so I tried to cater to those who don't have much development experience and to those who didn't bring a laptop.

Thus I set up three laptops: my Macbook, a Thinkpad, and a Vaio. And had three devices: my iPhone, a Nexus 7, and a FirefoxOS Flame. My Macbook would help to demonstrate the iOS side, whereas the other machines ran Linux Mint within VirtualBox. These VMs had adbtools and Firefox with WebIDE set up. All the mobile devices had the demo apps pre-installed so people could try them out.

I was as prepared as a boy scout. Well, until my iPhone was pickpocketed in London, stripping me of the iOS demonstration. Lugging around three laptops in my bag that probably amounted to 20 pounds back and forth between the hotel, subway, and venue wasn't fun. I didn't even know what day I was going to present at Mozfest. Then I didn't even use those meticulously prepared laptops at the session. Everyone who showed up was pretty knowledgeable, had a laptop, and had an internet connection.

The session went well nonetheless. After a short talk about pushing hybrid mobile apps to the forefront, my Nexus 7 and Flame were passed around to demo the sample hybrid mobile app running. It just had a simple camera button. That morning, everyone had received a free Firefox Flame for attending Mozfest, so it turned more into a WebIDE session on how to get an app on the Flame. My coworker who attended was able to get the accelerometer working with a "Shake Me / I was shaken." app, and I was able to get geolocation working with an app that displays longitude and latitude coordinates with the GPS.

What I Thought About Mozfest

There was a lot of energy in the building. Unfortunately, the energy didn't reach me, especially since I was heavily aircraft-latencied. Maybe conferences aren't my thing. The place was hectic, and it was hard to find out what was where. I tried to go to a session that was labeled as "The 6th Floor Hub", which turned out to be a small area of a big open room labelled with a hard-to-spot sign that said "The Hub". When I got there, there was no session being held despite the schedule saying so, as the facilitator was MIA.

The sessions didn't connect with me. Perhaps I wanted something more technical and concrete that I could take away and use, but most sessions were abstract. There was a big push for Mozilla Webmaker and Appmaker, though those aren't something I use often. They're great teaching tools, but I usually direct people to Codecademy when they want to learn to build stuff.

There was a lot of what I call "the web kool-aid". Don't get me wrong, I love the web, and I've drunk a lot of the kool-aid, but there was a lot of championing of the web in the keynotes. I guess "agency" is the new buzzword now. Promoting the web is great, though I've just heard it all before.

However, I was glad to add value to those who found it more inspiring and motivating than me. I believe my session went well and attendees took away something hard and practical. As for me, I was just happy to get back home after a long day of travel and go replace my phone.

Soledad Penades“Invest in the future, build for the web!”, take 2, at OSOM

I am right now in Cluj-Napoca, in Romania, for OSOM.ro, a small, totally non-profit, volunteer-organised conference. I gave an updated, shorter revised version of the talk I gave in Amsterdam this past June. As usual here are the slides and the source for the slides.

It is more or less the same, but better, and I also omitted some sections and spoke a bit about Firefox Developer Edition.

Also I was wearing this Fox-themed sweater which was imbuing me with special powers for sure:

fox sweater

(I found it at H & M this past Saturday; there are more animals if foxes aren’t your thing).

There were some good discussions about open source per se, community building and growing. And no, talks were not recorded.

I feel a sort of strange emptiness now, as this has been my last talk for the year, but it won’t be long until other commitments fill that vacuum. Like MozLandia—by this time next week I’ll be travelling to, or already in, Portland, for our work week. And when I’m back I plan to gradually slide into idleness. At least until 2015.

Looking forward to meeting some mozillians I haven’t met yet, and also visiting Ground Kontrol again and exploring new coffee shops when we have a break in Portland, though :-)


Kevin Ngo'Card Not Formatted' Error on Pentax Cameras with Mac OSX Card Reader

With some 64GB SDHC and SDXC cards on Pentax (and possibly other) cameras, you might get a 'Card Not Formatted' error. It may happen if you take some shots, plug the SD card into your Mac's card reader, upload the shots, and then unplug it. I've seen the error on my K30 and K3, though it's not an issue with the camera or the card.

The issue is with unplugging it. With some SD cards on OSX, the SD card has to be properly ejected rather than straight-up unplugged, or else it'll be left in some sort of weirdly formatted state. That may be obvious, but I never ran into issues unplugging cards before.

If you hit the error, you don't have to reformat the card. Simply plug it back into your machine, eject it, and then everything will have been properly torn down for the card to be usable.

Gregory SzorcTest Drive the New Headless Try Repository

Mercurial and Git both experience scaling pains as the number of heads in a repository approaches infinity. Operations like push and pull slow to a crawl and everyone gets frustrated.

This is the problem Mozilla's Try repository has been dealing with for years. We know the solution doesn't scale. But we've been content kicking the can by resetting the repository (blowing away data) to make the symptoms temporarily go away.

One of my official goals is to ship a scalable Try solution by the end of 2014.

Today, I believe I finally have enough code cobbled together to produce a working concept. And I could use your help testing it.

I would like people to push their Try, code review, and other miscellaneous heads to a special repository. To do this:

$ hg push -r . -f ssh://hg@hg.gregoryszorc.com/gecko-headless

That is:

  • Consider the changeset belonging to the working copy
  • Allow the creation of new heads
  • Send it to the gecko-headless repo on hg.gregoryszorc.com using SSH

Here's what's happening.

I have deployed a special repository to my personal server that I believe will behave very similarly to the final solution.

When you push to this repository, instead of your changesets being applied directly to the repository, it siphons them off to a Mercurial bundle. It then saves this bundle somewhere along with some metadata describing what is inside.

When you run hg pull -r on that repository and ask for a changeset that exists in the bundle, the server does some magic and returns data from the bundle file.

Things this repository doesn't do:

  • This repository will not actually send changesets to Try for you.
  • You cannot hg pull or hg clone the repository and get all of the commits from bundles. This isn't a goal. It will likely never be supported.
  • We do not yet record a pushlog entry for pushes to the repository.
  • The hgweb HTML interface does not yet handle commits that only exist in bundles. People want this to work. It will eventually work.
  • Pulling from the repository over HTTP with a vanilla Mercurial install may not preserve phase data.

The purpose of this experiment is to expose the repository to some actual traffic patterns so I can see what's going on and get a feel for real-world performance, variability, bugs, etc. I plan to do all of this in the testing environment. But I'd like some real-world use on the actual Firefox repository to give me peace of mind.

Please report any issues directly to me. Leave a comment here. Ping me on IRC. Send me an email. etc.

Update 2014-11-21: People discovered a bug with pushed changesets accidentally being advanced to the public phase, despite the repository being non-publishing. I have fixed the issue. But you must now push to the repository over SSH, not HTTP.

Asa DotzlerFlame Distribution Update

About three weeks ago, I ran out of Flame inventory for Mozilla employees and key volunteer contributors. The new order of Flames is arriving in Mountain View late today (Friday) and I’ll be working some over the weekend, but mostly Monday to deliver on the various orders you all have placed with me through email and other arrangements.

If you contacted me for a Flame or a batch of Flames, expect an email update in the next few days with information about shipping or pick-up locations and times. Thanks for your patience these last few weeks. We should not face any more Flame shortages like this going forward.