
Systems Design for Mobile Test Development

Are you a “software tester” or a “mobile tester”?  If you identify with the latter (and let’s face it, most testing nowadays is focused on mobile app development), then you need to stop thinking “I’m only a software tester.”  As software testers, we’ve long focused on the software alone, meaning the functionality of the software, without really considering a systems perspective.  But with mobile apps, depending on the architecture of the app, a full systems-integration approach gives more complete test coverage.

In my career, one hard lesson I have learned is how inter-dependencies affect software behavior.  Those inter-dependencies include hardware limitations, firmware or operating system functionality, or a combination of both.  The most complex type of testing is the combination of hardware and firmware conditions that directly or indirectly affect the app’s behavior.  A decade ago, I discovered that the battery’s various states of charge affected the functionality of a native app on a medical device.  Since then, I’ve seen other commercial, non-proprietary apps that rely on various states of the hardware or operating system but are not tested in those combined states.

Testing the states of the device and the app is important, but the tester must also understand when the functionality is supposed to behave based on the state.  Adding the sequence in which the functionality is supposed to behave, and taking into account the conditions or combinations of conditions, can be complex and quite overwhelming.  Because of this complexity, a smart, well-planned test strategy is vital to achieving high test coverage for any mobile app.  And that planned-out test strategy needs to incorporate systems thinking.
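To see how quickly those combinations of conditions grow, here is a minimal sketch that enumerates device/app state combinations and then prunes them to the riskier ones. The state names and the pruning rule are purely illustrative assumptions, not taken from any specific product:

```python
from itertools import product

# Hypothetical state dimensions; the names are illustrative assumptions.
battery = ["charging", "full", "low", "critical"]
connectivity = ["wifi", "cellular", "airplane_mode", "captive_portal"]
app_state = ["foreground", "background", "cold_start"]

# Exhaustive cross product: every combined state a test could visit.
all_states = list(product(battery, connectivity, app_state))
print(len(all_states))  # 4 * 4 * 3 = 48 combinations

# A full cross product explodes quickly, so a planned strategy prunes it,
# e.g. keeping only combinations that involve at least one "stressed" value.
stressed = {"low", "critical", "airplane_mode", "captive_portal", "cold_start"}
risky = [s for s in all_states if stressed.intersection(s)]
print(len(risky))  # 40 of the 48 involve at least one stressed value
```

Even with this toy example, only eight of the forty-eight combinations are entirely “happy path,” which is why an unplanned approach tends to test only those eight.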

Now, what do I mean by systems thinking in mobile test development?  The best way to describe it is through examples.  But first, the tester and/or test manager should consider these questions: the who, the how, the where, and the when of the mobile app’s usage on the mobile device.  Mobile testers need to ask these questions when planning regression and new testing.  I’ve often spoken about “testing beyond the GUI,” but I have not given any specific examples.  Tests should include system-boundary tests based on who will use the app, how the app will be used, where the app is used, and especially when at any given point of usage.  The sequence of functionality is not seriously addressed by many testers, and that oversight often leaves awkward states of the app unexplored.

One of the most commonly missed or misunderstood tests is how an app receives notifications.  Testing which color of LED light is used for a given type of notification means accessing the operating system of the device.  The uniqueness of the device itself may be a factor in which LED color notifies the user, or there may be no outward visual notification on the device at all, only an audible one.  Mobile testers need to know what mechanisms the app uses to let the user know of a notification.  It is also important to consider when that notification occurs based on whether the app is in use or not.  If the app is not in use, do other apps’ notifications step on your app’s notifications, or vice versa?  These might seem like silly, obvious tests to conduct, and yet I’ve seen widely used mobile apps where notifications do not work as I would expect.  Sometimes they are missing altogether due to an interruption from another app; sometimes I get no visible notification on the device’s LED light but still see a notification at the top of my screen.
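A notification rule like the one described above can be modeled so that every (notification type, app state) pairing gets a test, not just the one the developer happened to try. This is a hypothetical sketch: the function name, the LED color mapping, and the routing rules are all assumptions standing in for whatever the real app and platform define:

```python
# Hypothetical routing rule: which cues the user receives depends on the
# notification type and on whether the app is in the foreground.
def resolve_channels(notification_type: str, app_in_foreground: bool) -> dict:
    """Return the cues (banner, LED color, sound) for one notification."""
    led_colors = {"message": "blue", "alert": "red", "promo": None}
    return {
        # Assumed rule: an in-app banner replaces system cues while open.
        "banner": app_in_foreground,
        "led": led_colors.get(notification_type) if not app_in_foreground else None,
        # Assumed rule: only alerts are audible.
        "sound": notification_type == "alert",
    }

# Systems thinking means sweeping the whole grid, not one happy path.
for ntype in ("message", "alert", "promo"):
    for foreground in (True, False):
        print(ntype, foreground, resolve_channels(ntype, foreground))
```

A test suite built on a table like this makes the “no LED but a banner appeared” inconsistencies I described above show up as failing rows instead of user complaints.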

Another example of a system-level test to add to your test strategy is performance.  Most testers think of performance as speed, or “how fast can I connect.”  That is indeed one test, along with how responsive the app is to the commands a user issues to it.  But let’s not forget that performance also includes load, stress, and endurance, to name a few test concepts.  To illustrate, consider the use of a mobile app on an airplane where many other passengers are also trying to access the Internet.  Mobile testers need to consider not only the different functions of the app while accessing the Internet, but also more than 100 other passengers trying to access inflight Internet at the same time.  Can the app respond well to commands under that load in such a context?  Has the development team defined the boundaries or limitations regarding load on a network and how the app will respond?  What if the app degrades to the point of being unusable when 100 other users are trying to access the same network?  Even the use of different routers does not necessarily improve performance.  Knowing how the app will be used, where it will be used, by whom, and when can help you formulate your test strategy.
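The question “has the team defined the boundary where load makes the app unusable?” can be made concrete with even a toy contention model. The numbers below (link capacity, the usability floor) are illustrative assumptions, not measurements from any real inflight system; the point is that the boundary should be an explicit, testable value:

```python
# Toy model: a shared link of fixed capacity divided fairly among users.
LINK_CAPACITY_MBPS = 20.0   # assumed total inflight link capacity
MIN_USABLE_MBPS = 0.2       # assumed floor below which the app is unusable

def per_user_throughput(concurrent_users: int) -> float:
    """Bandwidth each passenger gets when n users share the link."""
    return LINK_CAPACITY_MBPS / max(concurrent_users, 1)

def app_usable(concurrent_users: int) -> bool:
    return per_user_throughput(concurrent_users) >= MIN_USABLE_MBPS

# With these assumed numbers, 100 passengers sit exactly at the boundary
# the load tests should probe; the 101st pushes everyone below it.
print(per_user_throughput(100), app_usable(100), app_usable(101))
```

A real load test replaces the formula with measured throughput, but keeping an explicit `MIN_USABLE_MBPS`-style threshold forces the team to state the boundary instead of discovering it at 35,000 feet.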

Let’s take the airplane example further.  Suppose you are trying to connect, you see others connected and streaming content, and you try to do the same, but the plane hits poor weather, kicking everyone off the network.  When all the passengers try to reconnect to the network at the same time, how does your app handle the load?  How does your app handle the interruption?  Does your app give you a notification to inform you the network is again available?  How responsive is the app once reconnected?  Can you access the content where you left off before losing connectivity?  Sometimes an app cannot reconnect because it cannot get a unique MAC address ID, which means firmware testing of the router to which the app is trying to connect.
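One common answer to the everyone-reconnects-at-once problem is exponential backoff with jitter, so that hundreds of clients kicked off together do not retry in lockstep and hammer the router again. This is a generic sketch of that technique, not the behavior of any particular app; the base, cap, and jitter strategy are assumed parameters a tester would want the team to specify:

```python
import random

def backoff_schedule(attempts: int, base: float = 1.0, cap: float = 30.0,
                     seed: int = 0) -> list[float]:
    """Delays (seconds) before each retry: capped exponential, full jitter.

    Each delay is drawn uniformly from [0, min(cap, base * 2**attempt)],
    spreading simultaneous reconnection attempts apart in time.
    """
    rng = random.Random(seed)  # seeded here only to keep the demo repeatable
    delays = []
    for attempt in range(attempts):
        window = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, window))
    return delays

# Two passengers' devices (different seeds) retry at different moments,
# which is exactly the point: no synchronized stampede on the router.
print([round(d, 2) for d in backoff_schedule(5, seed=1)])
print([round(d, 2) for d in backoff_schedule(5, seed=2)])
```

The system-level test here is two-sided: verify the app spaces out its retries, and verify the router firmware survives the residual burst when many windows happen to overlap.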

Mobile apps that rely on a cache to respond quickly to commands may create problems unforeseen by testers.  Going back to the example of losing connectivity on a plane: the app stored the location where the user was while streaming content.  Once connectivity is reestablished, tests should verify that the state of the app and the state of the device are maintained without errors, meaning the content continues streaming seamlessly.  But what if reconnection causes the app to restart?  Tests may also involve loss of battery charge, whether the user is able to recharge the device, and how long an interruption can last before it affects the app’s behavior.  Tests should cover which states the app can be in when the cache is utilized and when it is not.  Testers need to know how important cache retention is to the user at any given point of the app’s usage.
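The resume-after-restart behavior can be sketched as a tiny persisted cache: the app saves the playback position, and on restart it either restores it or falls back cleanly when the cache is missing or belongs to different content. The file layout and field names here are hypothetical, chosen only to make the fallback states testable:

```python
import json
import os
import tempfile

def save_position(path: str, content_id: str, seconds: float) -> None:
    """Persist the last playback position for one piece of content."""
    with open(path, "w") as f:
        json.dump({"content_id": content_id, "position_s": seconds}, f)

def restore_position(path: str, content_id: str) -> float:
    """Return the saved position, or 0.0 when the cache cannot be trusted."""
    try:
        with open(path) as f:
            state = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return 0.0  # cold start or corrupted cache: start from the beginning
    if state.get("content_id") != content_id:
        return 0.0  # cache is for different content: treat it as stale
    return float(state.get("position_s", 0.0))

cache = os.path.join(tempfile.gettempdir(), "resume_cache_demo.json")
save_position(cache, "movie-42", 1312.5)
print(restore_position(cache, "movie-42"))  # resumes where the user left off
print(restore_position(cache, "movie-99"))  # stale for other content: 0.0
```

Each `return 0.0` branch is one of the states worth a dedicated test: missing cache, corrupted cache, and cache for the wrong content all look identical to the user (playback restarts), but they have different causes and different fixes.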

Mobile tests can be incredibly complex.  Mobile testers need to think “system” and not just focus on software testing.  Once the planning is done, automated tests can be created, but keep in mind whether each test is a one-time test or a repeatable one.  It’s not cost-effective to automate one-time tests, and sometimes it’s just faster to perform the test manually.  This is why it’s vital to understand the who, the how, the where, and the when of your app’s usage.

Author Bio


Jean Ann has been in the Software Testing and Quality Assurance field for almost two decades, including 10 years as a mobile software tester. She is a globally recognized mobile software testing specialist, mentor, trainer, writer, and public speaker.

Jean Ann’s software testing and quality assurance experience covers various industry domains, including avionics, medical diagnostic tools, surgical tools, healthcare facilities and insurance, law enforcement, publishing, and the film industry. Her testing environments range from multi-tiered applications in a strict Waterfall software development life cycle to a regulated medical-device environment using a hybrid of Iterative and Agile life cycles. Her testing includes client/server applications, websites, databases, and mainframe records. She has tested mobile apps on proprietary devices for police, medical diagnostic, and surgical use; on non-proprietary devices, her app testing includes financial, business, and gaming apps. She is currently helping to build quality into software development projects for in-flight entertainment, where the development environment is a hybrid of agile and waterfall approaches due to regulation.

Jean Ann is a consistent speaker at various software testing conferences. She actively mentors, conducts training and workshop sessions, publishes webinars, participates in podcasts, and reviews and contributes to several books. Jean Ann is a graduate of St. Anselm College with a Bachelor of Arts degree in Political Science.