As the WinRT API is laced with async methods, having a robust approach to testing async behavior is very helpful for producing a robust app. Here are a few key suggestions.

First, test re-entering a page multiple times when that page starts async operations. That is, navigate to a page that starts those operations, navigate away, then navigate forward again. Repeat a few times. Ideally, the code cancels any unfinished async operation when you leave the page; otherwise this test can produce redundant operations that collide with one another.

More generally, the suggestion above means planning and testing for cancelation. As you do, look at behaviors in your UI that depend on async operations either starting or completing, such as enabling/disabling controls.
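If it helps to see the shape of that pattern, here's a minimal sketch in WinJS, where the page URL and the loadDataAsync function are assumptions for illustration: the page holds onto its in-flight promise and cancels it in its unload method.

WinJS.UI.Pages.define("/pages/home/home.html", {
    ready: function (element, options) {
        // Keep the in-flight promise so unload can cancel it.
        // loadDataAsync is a hypothetical async load that returns a promise.
        this._dataPromise = loadDataAsync();
        this._dataPromise.done(
            function (data) { /* render the data into the page */ },
            function (err) { /* a Canceled error lands here on unload */ }
        );
    },

    unload: function () {
        // Cancel any unfinished operation when leaving the page.
        if (this._dataPromise) {
            this._dataPromise.cancel();
            this._dataPromise = null;
        }
    }
});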

You can also insert random delays in your code for async calls. In JavaScript this means creating a promise with WinJS.Promise.timeout(<delay>) and adding it to your chain. In C#/VB, use await Task.Delay inside an #if DEBUG block.

In JavaScript, because we don't have #ifdef and because a new promise inserted into a chain needs to deliver meaningful results from the previous step in the chain, a good approach is to create a function that has as its return value a function that can serve as a completed handler in the chain. For example, assume you have a chain like this:

operation1().then(function (result1) {
    return operation2(result1);
}).then(function (result2) {
    return operation3(result2);
}).done();

You can create a function like the following to produce a delaying completed handler. Here the debugMode flag is a global variable that determines whether to create a delay promise that passes along the previous result in the chain, or to just deliver that result through WinJS.Promise.as, which adds very minimal overhead:

function insertDelay(delay) {
    // Returns a completed handler that passes the previous result along,
    // delayed by the given number of milliseconds when debugMode is set.
    return function (result) {
        if (debugMode) {
            return WinJS.Promise.timeout(delay).then(function () { return result; });
        } else {
            return WinJS.Promise.as(result);
        }
    };
}

Inserting it into the chain then looks like this:

operation1().then(function (result1) {
    return operation2(result1);
}).then(insertDelay(1000))
.then(function (result2) {
    return operation3(result2);
}).done();

When testing your app, keep a keen eye out for how and when progress indicators show up, or how they don't show up, for that matter. Generally speaking, a progress indicator shows that some long-running process is happening or that the app is waiting for data or other results to be delivered. The latter is probably the most common, because long-running processes like image manipulation are under the app's control and are just a matter of crunching the pixels. Network latency, on the other hand, whether from connection timeouts or slow transfer speeds, isn't something the app can do much about other than wait.

To simulate a slow connection, you can, of course, create additional traffic on your connection while you're testing the app, e.g. playing videos, running big downloads (and uploads), and so forth. The faster your connection, the more you'll have to load it up. You might also be able to change settings in your modem or router to make everything run more slowly. For example, if you have a wireless-N router, change it to run only at wireless-A speeds, then load it up with extra work.

You can also add some debugging code to simulate latency. In JavaScript, for example, wrap your network calls with a setTimeout of 1-2 seconds, or longer, depending on what you want to simulate. You can wrap the initial call itself and also wrap delivery of the results. You could create a small wrapper that gets the results quickly into a separate buffer, but then delivers them as real results within a series of setTimeout calls. Having such a setup that you can control with a simple debug flag makes it easy to dial the simulated latency up and down as you test.
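Here's a minimal sketch of such a wrapper, reusing the debugMode flag from the earlier example (the delayMs value is whatever latency you want to simulate):

// In debug mode, hold back delivery of the response to simulate latency.
function delayedXhr(options, delayMs) {
    return WinJS.xhr(options).then(function (response) {
        if (!debugMode) {
            return response;
        }
        return WinJS.Promise.timeout(delayMs).then(function () {
            return response;
        });
    });
}

// Usage:
// delayedXhr({ url: "http://example.com/data.json" }, 1500)
//     .done(function (response) { /* process as usual */ });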

Whatever the case, slowing down the network will help show where and how progress indicators are being used. Look especially for places where your app just appears to sit there without any visual indication of the work it's doing. This reveals where your code is assuming data will come back quickly and where you'll need to put up a progress indicator if the user ends up waiting more than 1-2 seconds (the recommended delay before showing an indicator).

On a slow connection, too, you can better evaluate whether you're overusing progress indicators. That is, while it's nice that we have controls to show work happening, users will get tired of looking at them over and over. So think about how you might architect the app to wholly eliminate the need for those indicators. For example, if you're switching between a page with a list of items and a details page for one item, and there's a delay in that switch, it's a good place to simply toggle the visibility of two fully constructed pages rather than doing a real navigation that tears down and rebuilds the pages each time, as sketched below. The user won't be able to tell the difference, except that everything runs much faster. Indeed, building a page with a list control of hundreds or thousands of items is a very expensive process, so avoiding that work is a good thing, especially when the list page itself doesn't change while navigating in and out of details.
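As a minimal sketch of that visibility switch, assuming two fully constructed elements with the ids listPage and detailsPage, and a hypothetical renderDetails function for filling in one item:

function showDetails(item) {
    // Hide the list and reveal the already-built details page.
    document.getElementById("listPage").style.display = "none";
    document.getElementById("detailsPage").style.display = "block";
    renderDetails(item); // hypothetical: fill in the one item's details
}

function showList() {
    document.getElementById("detailsPage").style.display = "none";
    document.getElementById("listPage").style.display = "block";
}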

On the flip side, also test your app on the fastest connection you can find and see if there are any places where progress indicators show up unnecessarily. This reveals places where you're assuming the data will take a long time to obtain, and where an indicator might just flash on the screen momentarily and create visual noise. In those cases, make sure you again use a short delay before showing the indicator, as in the sketch below.
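A minimal sketch of that delayed indicator, where the progressRing element id and the two-second threshold are assumptions:

// Show a progress ring only if the operation runs longer than ~2 seconds,
// so the indicator doesn't just flash on screen for fast operations.
function withProgress(promise) {
    var ring = document.getElementById("progressRing");
    var timer = setTimeout(function () {
        ring.style.display = "block";
    }, 2000);

    return promise.then(function (result) {
        clearTimeout(timer);
        ring.style.display = "none";
        return result;
    }, function (err) {
        clearTimeout(timer);
        ring.style.display = "none";
        return WinJS.Promise.wrapError(err);
    });
}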

Before submitting your app to the Store, take some time to review your app's manifest and the information and files it references. Concentrate mostly on the Application UI tab in Visual Studio's manifest editor. If you haven't noticed or installed recent Visual Studio updates, the VS team has improved the editor by bringing all of the app's UI elements into the Application UI tab. Earlier, bits of UI like the Store logo were on the Packaging tab, which made them easy to miss. Now it's all together, and the editor shows you every scaled version of every graphical asset (there's so much to scroll that you can't fit it into one screen shot):

[Screenshots: the manifest editor's Application UI tab]

Here's what to pay special attention to:

  1. Make sure that all logos (logo, wide logo, small logo, and store logo) are representative of your app. Avoid shipping an app with any of the default graphics from the VS templates. Note that the Store logo never shows up in the running app, so you won't notice it at runtime.
  2. Double-check how you're using the Show Name option along with Short Name and/or Display Name. Go to the Start screen, switch your app tile between square and wide, and see if the tile appears the way you want. In the graphics above, notice how the app's name is already included in the tile images, so having Show Name set to "All logos" will make the tile look silly (see below). In this case I'd want to change that setting to "No logos." However, if my square tile didn't include the app name as text, I'd set it to "Standard logo only" instead.
    [Screenshot: a tile showing the app name twice]
  3. If you set Short Name, its text will be used on the tile instead of Display Name.
  4. Be aware that if you're using live tiles, the XML update for a tile can specify whether the display/short name should be shown in the branding attribute. See my post on the Windows 8 Developer Blog, Alive with Activity Part 1, for details.
  5. If you don't have specific scaled assets, reevaluate your choices here. Remember that if you don't provide a specific version for each given pixel density, Windows will take one of the others and stretch or shrink it as needed, meaning that the app might not look its best.
  6. Examine the relationship between the small logo, your specified tile background color, and any live tile updates and toast notifications you might use. Live tiles and toasts can specify whether to show the small logo, and the tile background color is used in both instances. If you have a mismatch between the small logo edge colors and the background color, you'll see an obvious edge in tiles and toasts.

The Windows Store certification requirements (section 4.5) stipulate that apps doing data transfers must avoid large transfers over metered connections (you know, the ones that charge you outrageous amounts of money if you exceed a typically small transfer limit!). At the same time, getting a device set up on such a network can be time-consuming and costly (especially if you exceed the limit).

There is certainly room here for third-party tools to provide a comprehensive solution. At present, within Windows itself, the trick is to use a Wi-Fi connection (not an Ethernet cable), then open the Settings charm, tap your network connection near the bottom (see below left, the one that says 'Nuthatch'), and then, in the Networks pane that appears (below right), right-click the wireless connection and select Set as metered connection.

[Screenshots: network settings in the Settings charm, and the Networks pane's context menu]

Although this option doesn't set up data usage properties in a network connection profile, or other things a real metered connection might provide, it will return a networkCostType of fixed, which allows you to see how your app responds. You can also use the Show estimated data usage menu item (shown above) to watch how much traffic your app generates during its normal operation, and you can reset the counter so that you can take accurate readings:

[Screenshot: estimated data usage for a Wi-Fi connection]
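In code, checking the cost type looks something like this minimal sketch, using the Windows.Networking.Connectivity API:

var connectivity = Windows.Networking.Connectivity;
var profile = connectivity.NetworkInformation.getInternetConnectionProfile();

if (profile) {
    var cost = profile.getConnectionCost();

    // On a connection marked as metered, networkCostType comes back as
    // fixed (or variable on some plans), so defer or trim large transfers.
    if (cost.networkCostType === connectivity.NetworkCostType.fixed ||
        cost.networkCostType === connectivity.NetworkCostType.variable) {
        // Metered connection: avoid large transfers.
    } else {
        // Unrestricted connection: proceed normally.
    }
}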

It's also worth noting that Task Manager has a column in its detailed view, on the App history tab, for metered network usage, where you can track your app's activities.

[Screenshot: Task Manager's App history tab]

It's good to know this as a user, too, as you can see if there are any misbehaving apps you might want to reconfigure or uninstall.

When using contracts that broker interaction between two apps, it's natural to wonder whether your app will really work with all other arbitrary apps. That is the intent of contracts in the first place, but it's good to put apps in the Store only when you're reasonably confident that they'll work according to that intent.

With sharing data through the Share contract, in particular, it really boils down to finding target apps that can consume the data formats you provide. If you're using only known formats (HTML, text, bitmap, files), then the Share target sample app in the Windows SDK provides a good generic target. When you invoke it through Share, it shows you all the data it finds in the shared package, which is very helpful for verifying that the package contains the data you think it should.

You'll then want to test with some likely common targets. The built-in Mail app is a good candidate, as is the Twitter app that you can find in the Windows Store.

For custom formats, you probably wouldn't be sharing such data without having some target app in mind, so you'll know which apps to test with.

Beyond that, the APIs for sharing are actually designed so that you don't need to worry too much about more extensive testing. The only way standard-format data gets into a shared data package is through method calls like DataPackage.setBitmap and DataPackage.setUri. That is, rather than having the source app just drop a string or some other object into the package directly, these method calls can validate the data before making it available. This guarantees that the target app will have good data to consume, meaning that the target can simply examine the available formats and use them as it will.
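For reference, here's a minimal sketch of a share source handler built on those setters (the title, description, and data values are just examples):

var dtm = Windows.ApplicationModel.DataTransfer.DataTransferManager
    .getForCurrentView();

dtm.addEventListener("datarequested", function (e) {
    var data = e.request.data;
    data.properties.title = "Example share";
    data.properties.description = "Exercising the standard formats";

    // Each setter validates its data before adding it to the package.
    data.setText("Some shareable text");
    data.setUri(new Windows.Foundation.Uri("http://example.com/"));
});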


It should go without saying, but it's easy to forget. When testing your app, be sure to exercise various combinations of:

  • View states: fullscreen landscape, filled, snapped, and fullscreen portrait.
  • Screen sizes: from 1366×768 (or smaller, if you have such hardware) to high resolutions like 2560×1440
  • Resolution scaling: 100%, 140%, and 180%

You can test these things on actual hardware, of course, which is why Microsoft set up the app labs as I mentioned before.

Otherwise, get to know the features of the Visual Studio simulator as well as the Device options in Blend.

In the simulator, the two rotation buttons on the right side will help test landscape/portrait, and the resolution scaling control lets you choose different screen sizes at 100% and a few that go to 140% and 180%:

[Screenshot: the Visual Studio simulator]

In Blend, the Device tab (next to Live DOM) gives you options to show the app in all the view states (the buttons next to View) as well as different screen sizes and scalings (the same options the simulator provides):

[Screenshot: Blend's Device tab with resolution options]


Make it a habit, then, to regularly run through these different combinations to find layout problems with configurations that some portion of your customers will certainly have.
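While running through the combinations, a bit of debug output helps confirm what the app is actually seeing; here's a minimal sketch using the Windows 8 view management and display APIs:

// Log the current view state and resolution scale on every resize to
// verify that each test combination is really being exercised.
window.addEventListener("resize", function () {
    var viewState = Windows.UI.ViewManagement.ApplicationView.value;
    var scale = Windows.Graphics.Display.DisplayProperties.resolutionScale;
    console.log("View state: " + viewState + ", scale: " + scale + "%");
});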



Process lifetime events, like suspend, resume, and being restarted after the system terminates the app from the suspended state, are very important to test. This is because when you're running in a debugger, these events typically don't happen.

Fortunately, Visual Studio provides a menu at debug time to trigger these different events:

[Screenshot: Visual Studio's debug toolbar menu for process lifetime events]

Selecting either of the first two will fire the appropriate events, allowing you to set breakpoints in your handlers to debug them.

The third command, Suspend and shutdown, simulates the case where the app is suspended (as if the user switched to another app), and then the system drops the app from memory due to the needs of other apps.

In the debugger, you'll see your suspend events fire; then the app exits and the debugger stops. More important, when you start the debugger again, you'll see the previousExecutionState flag in the activated handler set to terminated. This gives you an easy way to step through the startup code path where you'll be rehydrating the app from whatever session state you saved during suspend.
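That startup path looks something like the following minimal sketch, where restoreSessionState is a hypothetical function for rehydrating whatever you saved:

var app = WinJS.Application;
var activation = Windows.ApplicationModel.Activation;

app.addEventListener("activated", function (args) {
    if (args.detail.kind === activation.ActivationKind.launch) {
        // terminated means we're restarting after Suspend and shutdown.
        if (args.detail.previousExecutionState ===
            activation.ApplicationExecutionState.terminated) {
            restoreSessionState(); // hypothetical: reload saved session state
        }
    }
});
app.start();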

After testing in the debugger, it's a good idea to test the app under real conditions as well. Suspend and resume are easy enough: just switch to other apps. Be sure to leave the app suspended for different lengths of time if you have any kind of timestamp checking in your resuming handler, e.g. code that determines how long it's been since suspend so that it can refresh itself from online content. If you can, try to leave the app suspended for a day or longer, even a week, to really exercise resuming.
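For that timestamp check, a minimal sketch (the threshold and the refreshFromOnlineContent function are assumptions):

var REFRESH_THRESHOLD_MS = 30 * 60 * 1000; // e.g. 30 minutes
var suspendTime = null;

Windows.UI.WebUI.WebUIApplication.addEventListener("suspending", function () {
    suspendTime = Date.now();
});

Windows.UI.WebUI.WebUIApplication.addEventListener("resuming", function () {
    // Refresh only if we've been suspended longer than the threshold.
    if (suspendTime && Date.now() - suspendTime > REFRESH_THRESHOLD_MS) {
        refreshFromOnlineContent(); // hypothetical refresh routine
    }
});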

To test terminate/restart in a real-world scenario, you'll basically need to run a bunch of other memory-hungry apps to force Windows to dump suspended apps from memory (large games are good for this). You'll make it easier on yourself if you have less memory in the machine to begin with; in fact, to test this condition you could shut down your machine and pull some physical memory (or use an older, less powerful machine for this purpose).

Alternatively, you can use a tool that purposely eats memory, thus triggering terminations. A good list of tools for this purpose can be found at http://beefchunk.com/documentation/sys-programming/malloc/MallocDebug.html.


To continue from my previous post, here are some additional performance tools to know about.

First is a JavaScript memory analyzer that was released a short while ago. Read about it here: http://blogs.msdn.com/b/visualstudio/archive/2013/01/28/javascript-memory-analysis-for-windows-store-apps-in-visual-studio-2012.aspx

Second, here are topics in docs on performance analysis:

Analyzing performance of Windows Store Apps

Timing and Performance APIs (HTML/JS)

Analyzing memory usage in Windows Store Apps (HTML/JS)

Analyzing the code quality of Windows Store apps with Visual Studio code analysis (C#, VB, C++)


Finally, there are two posts on the Windows 8 Developer Blog that are relevant:

How to improve performance in your Metro style app

Tackling performance killers: common performance problems with Metro style apps



The Windows App Certification Kit, or WACK, which is part of the Store certification process, is something you should be running every day during app development to see if it turns up any problems.

It almost invariably warns about startup time and suspend time, but it's good to watch these conditions closely because of the automatic deadlines built into Windows. To review: if an app takes more than 15 seconds to create its first window (and thus tear down the default splash screen) and the user switches away, Windows will terminate that app. (If the user doesn't switch away, the app will continue to launch.) With suspend, if the app doesn't return from its suspending event handler within five seconds, it will also be terminated.

Under optimal conditions, you typically don't need to worry about these things, but if you've learned anything about testing you'll know that apps seldom run under optimal conditions!

One such non-optimal condition is when there's a lot of disk and/or network activity going on at the same time, such that your app experiences slower-than-normal responses from disk/network APIs. For example, the app might be trying to load file data from its package during startup, or might be making various XmlHttpRequests on suspend. When there's a bunch of other activity going on, an operation that normally takes half a second might take much longer and perhaps cause the app to exceed the time limits.

This means two things. First, be sure to test app launch and suspend when other stuff is going on. Try, for instance, to launch your app as soon as you can after booting the system, when there's still other disk activity. Or do some intensive operation on the file system (like run the Disk Cleanup tool) and see what happens when you launch or suspend. Try the same things after you've started a few large downloads and a few uploads, so your app has to compete with those transfers.

Second, implement your app to be resilient to high latency. If in the above tests you find that startup can take a long time, be sure to implement an extended splash screen to give yourself as much time as you need. For suspend, first take the approach of writing/transferring as much state data to disk before suspend occurs (i.e. when the data changes), thereby minimizing the work that's necessary during the event itself. Also avoid assuming that you can do the full load of possible work while suspending. If there's doubt, create a strategy wherein you save the most critical data first, then attempt additional stages of secondary work if there's still time. The suspending event tells you the deadline for completing your work, so you can keep checking elapsed time after each operation to see if you want to start another, as in the sketch below.
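Here's a minimal sketch of that staged approach, where saveCriticalStateAsync and saveSecondaryStateAsync are hypothetical functions that each return a promise:

Windows.UI.WebUI.WebUIApplication.addEventListener("suspending", function (e) {
    var operation = e.suspendingOperation;
    var deferral = operation.getDeferral();

    saveCriticalStateAsync().then(function () {
        // operation.deadline is a Date; attempt secondary work only if
        // there's comfortable headroom before the deadline.
        if (operation.deadline.getTime() - Date.now() > 1000) {
            return saveSecondaryStateAsync();
        }
    }).done(function () {
        deferral.complete();
    });
});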

In any case, running your app under high-latency conditions will also show you where you might increase performance and/or show progress indicators where you haven't before, because tests under optimal conditions never revealed the need for them.