Wikimedia blog

News from the Wikimedia Foundation and about the Wikimedia movement

Posts Tagged ‘QA’

Get introduced to Internationalization engineering through the MediaWiki Language Extension Bundle

The MediaWiki Language Extension Bundle (MLEB) is a collection of MediaWiki extensions for various internationalization features. These extensions and the Bundle are maintained by the Wikimedia Language Engineering team. Each month, a new version of the Bundle is released.

The MLEB gives webmasters who run sites with MediaWiki a convenient solution to install, manage and upgrade language tools. The monthly release cycle allows for adequate testing and compatibility across the recent stable versions of MediaWiki.

A plate depicting text in Sanskrit (Devanagari script) and Pali languages, from the Illustrirte Geschichte der Schrift by Johann Christoph Carl Faulmann

The extensions that form MLEB can be used to create a multilingual wiki:

  • UniversalLanguageSelector — allows users to configure their language preferences easily;
  • Translate — allows a MediaWiki page to be translated;
  • CLDR — provides language-specific locale data such as date, time and currency formats (used by the other extensions);
  • Babel — provides information about language proficiency on user pages;
  • LocalisationUpdate — updates MediaWiki’s multilingual user interface;
  • CleanChanges — shows RecentChanges in a way that reflects translations more clearly.

The Bundle can be downloaded as a tarball or from the Wikimedia Gerrit repository. Release announcements are generally made on the last Wednesday of the month, and details of the changes can be found in the Release Notes.

Before every release, the extensions are tested against the last two stable versions of MediaWiki on several browsers. Some extensions, such as UniversalLanguageSelector and Translate, need extensive testing due to their wide range of features. The tests are prepared as Given-When-Then scenarios, i.e. an action is checked for an expected outcome assuming certain conditions are met. Some of these tests are in the process of being automated using Selenium WebDriver and the remaining tests are run manually.
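As an illustration, a Given-When-Then scenario for a language tool might look like the sketch below. The feature and step names here are hypothetical, not taken from the actual test suites:

```gherkin
Feature: Universal Language Selector
  Scenario: Changing the interface language
    Given I am logged in on the wiki main page
    When I open the language selector
      And I choose "Deutsch" as my interface language
    Then the page interface should be displayed in German
```

Each scenario reads as a plain-language specification, which makes it equally usable as a script for a manual test run or as the input to an automated test framework.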

The automated tests currently run only on Mozilla Firefox. For the manual test runs, the Given-When-Then scenarios are replicated across several web browsers. These are mostly the Grade-A level supported browsers. Regressions or bugs are reported through Bugzilla. If time permits, they are also fixed before the monthly release, or otherwise scheduled to be fixed in the next one.

The MLEB release process offers several opportunities to participate in the development of internationalization tools. The testing workflow introduces participants to the features of the commonly used extensions. Finding and tracking bugs in Bugzilla familiarizes them with the bug lifecycle and provides an opportunity to work closely with the developers while the bugs are being fixed. Writing a patch to fix a bug is the next exciting step of exploration, and new participants are always encouraged to take it.

If you’d like to participate in testing, we now have a document that will help you get started with the manual tests. Alternatively, you could help write the automated tests (using Cucumber and Ruby). The newest version of MLEB has been released and is ready for download.

Runa Bhattacharjee
Outreach and QA coordinator, Language Engineering, Wikimedia Foundation

Help us test and investigate VisualEditor

We need your help to test VisualEditor and uncover bugs before we enable it on more wikis.

One of the most important and challenging software development projects at the Wikimedia Foundation right now is VisualEditor: a rich-text editor for Wikipedia that does not require users to learn MediaWiki’s markup syntax. Today, we need your help to make it more robust and reliable.

The alpha version of VisualEditor enabled on the English Wikipedia in December was focused on basic functionality. We’re now moving toward supporting more complex editing operations, notably involving non-Latin characters and character sets.

In order for all language editions of Wikipedia and its sister projects to benefit from VisualEditor, we need to test it extensively, and we need your help to break it (and fix it) before we enable it everywhere.

Non-Latin characters (like math symbols: ⟂) and scripts (like Chinese: 嘗試, and Hebrew: סה) can be more difficult to support than the Latin characters used, for example, in English.

Starting today (Monday, January 28th, 2013) and continuing all week long, we need your help to test how VisualEditor functions when working with non-Latin characters. We’re relatively confident that VisualEditor can reliably load a wiki page and save that page without losing any information. What is less clear is whether it behaves properly when manipulating non-Latin text, special characters, and other less common aspects of the greater set of Unicode characters.
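The property under test can be sketched in miniature: whatever text goes into the editor must come back out unchanged. The helper below is hypothetical (the real tests drive VisualEditor in a browser), but it shows the shape of such a round-trip check:

```ruby
# -*- coding: utf-8 -*-
# Hypothetical round-trip check: a save/load cycle must preserve every
# character, Latin or not. The real tests exercise VisualEditor itself;
# this stand-in simply normalizes the text to UTF-8.
def save_and_reload(text)
  text.encode("UTF-8")
end

samples = ["plain Latin text", "嘗試", "סה", "⟂", "café"]

samples.each do |s|
  unless save_and_reload(s) == s
    raise "Round-trip lost characters in: #{s}"
  end
end
puts "#{samples.length} samples round-tripped intact"
```

A real regression here would surface as mangled or dropped characters after saving, which is exactly the class of bug we are asking testers to hunt for this week.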

If you care at all about VisualEditor, internationalization and localization, accessibility, or you simply enjoy hunting down bugs in software, join us this week to identify those issues! You’ll help to improve VisualEditor before it’s enabled more widely.

Our test plan should tell you everything you need to know to get started. We’re also available on IRC for real-time collaboration; all the details are in our coordination page.

The Wikimedia Foundation’s software development model is iterative: we release software early, get feedback, improve it, get more feedback, etc. We’ve set up a dedicated group for this kind of testing that you may want to join. At this time, thoughtful feedback about how VisualEditor manages non-Latin characters is crucial to the next steps of our new editor. We hope to take these steps with you.

Chris McMahon, QA Lead

How do you establish a QA & Testing practice for an open community?

To keep up with the growth of Wikipedia and its community, one goal of the engineering team at the Wikimedia Foundation for this year is to establish a Quality Assurance (QA) practice for software development, including MediaWiki itself, extensions, and also projects like Article Feedback and Editor Engagement. But how do you establish a QA & Testing practice for a development process that involves so many contributors, with code coming in from so many sources and projects?

In software development, QA is often conflated with software testing, but testing is only a small part of QA in general. The goal of modern software testing is not only to discover defects, but also to investigate software in order to provide valuable information about that software from every point of view, from the user experience to how the software is designed. On the other hand, Quality Assurance is process work, examining the process by which the software is created, from design to code to test to release and beyond.

Dozens of (volunteer and paid) developers contribute code to MediaWiki every month, in areas as varied as MediaWiki’s core, MediaWiki extensions, and localization. Thousands of power users on Wikimedia’s wikis can also contribute code directly on the sites, in the form of JavaScript “gadgets”. With so many entry points for fast-paced development, starting a QA/testing practice is challenging. Our strategy is to focus on two areas: test automation and building a testing community. We’re hiring people to coordinate these two areas.

As QA Lead, I have created an example of what I believe to be the best available test automation “stack”, to pave the way for what I intend to become a reference implementation: an industry standard for high-quality browser test automation. We’re now hiring a QA Engineer whose primary responsibility will be to create and maintain browser-level test automation. In the course of creating those automated tests, we will improve our use of the source code repository (recently migrated from Subversion to Git), improve the beta labs test environment, and expand our use of continuous integration in Jenkins.

But test automation isn’t everything, and we also have an opportunity to apply the Wikimedia community’s expertise in online volunteer collaboration to software quality. We’ve already started to explore this path with success: in May, we collaborated with Weekend Testing to validate the new frequent release schedule of MediaWiki to Wikimedia sites. Weekend Testing is an established global group of professional software testers who gather online every month for a different testing project, and testing MediaWiki versions on Wikipedia was a complex effort, executed well. In June, we collaborated with OpenHatch.org to test a near-final version of the new Article Feedback system that will be released to all of Wikipedia in the coming weeks. OpenHatch is an organization dedicated to matching interested participants in open source software with projects that need them. This was the first testing project for OpenHatch, and it went well; Article Feedback is much improved because of it.

We are now hiring a Volunteer QA Coordinator, who will be working to create a culture of quality testing and investigation of software related to Wikimedia, both within the Wikimedia community itself, and in collaboration with the greater software testing culture. And we are already planning future activities with both Weekend Testing and OpenHatch.

My first few months as QA Lead at the Wikimedia Foundation have been devoted to creating an environment where the QA Engineer and the Volunteer QA Coordinator will thrive. I am really looking forward to collaborating with the talented people we will hire for these roles. My own role will be shifting as these new practices start to take hold. I will be looking to the future, to bring in innovative and creative approaches to software QA and testing of the highest possible quality.

Chris McMahon
QA Lead Engineer

Collaborative software testing: you can help

On May 5th, 2012, the Weekend Testing Americas group was invited to focus on testing Wikimedia sites during its monthly two-hour session.

The objective was to identify issues related to the new faster deployment schedule of MediaWiki to Wikimedia sites. The timing was excellent, as version 1.20wmf2 of MediaWiki had been deployed to all non-Wikipedia sites (like Wikisource and Wikimedia Commons), while all of the Wikipedias were still running 1.20wmf1.

Weekend Testing is “a platform for software testers to collaborate, test various kinds of software, foster hope, gain peer recognition, and be of value to the community”.

In less than two hours, the Weekend Testers reported fourteen issues in Bugzilla, based on a test plan I had prepared. In general, the quality of the reports was quite good. The WTA team posted an experience report of the session, as well as a full transcript (PDF, 142 KiB).

Building on the success of the Weekend Testing event, we will be collaborating with OpenHatch on another community testing event on June 9th, 2012, with the aim of discovering issues with the new Article Feedback Tool (AFT). As with MediaWiki 1.20, the timing for testing AFT is particularly opportune, as the software will be nearly in its final state, and the AFT team will be in a position to address issues found in the final stages of AFT development.

Collaborative software testing by the community is in an early, experimental stage at the Wikimedia Foundation, but based on the success so far, we expect to see more such events in the future. And you are welcome to join us and OpenHatch on June 9th to help test the Article Feedback Tool!

Chris McMahon
QA Lead