Friday, 26 April 2013

Design Considerations When Building Cross Platform Applications: Visual Design and User Experience


In this second post of my blog series about Design Considerations When Building Cross Platform Applications, I'm focusing on what most people typically consider the only design consideration: Visual Design and User Experience. When a product company begins planning a new product, the team usually moves through phases of refining and scrutinizing the idea, followed quickly by a design and/or prototype phase. This phase is the starting point because it is what helps the product managers – and, in the long run, the customers – visualize the product's potential and how a user might use it. The factors weighed during this period include the overall cosmetic design (product consistency, simplicity and customization) and the user interaction flow. As the designer digs into the idea and puts parameters around the product, they must consider the company's logos and colors as well as its current and future product suite. It is very important that each application the company delivers has a similar look, so the user feels that consistency across all products in the suite.
In my experience at SuperConnect thus far, this approach rings true. We began with one designer and have been lucky enough to work with several others over our short lifespan. The result was initially a collection of varied design approaches, starting with a more web-centric design on a mobile device and evolving into the very clean, concise design we now use in our products. We see the consistency that a full-time designer and thought leader can bring to a suite of products meant to drive enterprise mobility. The catch – there is always a catch – is that while our products reflect the comprehensive, hard work of our team, customers still want to make their own mark when deploying applications to the enterprise. To help our customers use SuperConnect products in a way that accommodates and enhances their own brands, we have taken an 80/20 approach. We've built strict templates within our applications that keep the customer's administrators within our designer's ideas and plans, while still allowing a bit of customization. One example is our Connections application, where we provide a unique look at the employee directory by letting the customer specify which data is displayed in the application. Our design provides an interface where the administrator can choose which data fields appear in different areas of the application, based on templates defined by our designers. In the end, our design remains intact because we don't allow customization of fonts (color, size or type), the layout of the fields themselves, or the placement of images within the app. We do, however, let the administrator choose which data field is displayed in the various locations on those templates.
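To make the template idea concrete, here is a minimal sketch of what such a field-display configuration could look like. The field names, template areas and types below are purely illustrative assumptions on my part, not the actual Connections schema.

```typescript
// Hypothetical sketch of a designer-defined display template: the administrator
// chooses which directory field appears in each predefined area, but cannot
// change fonts, field layout or image placement. All names are illustrative.

type DirectoryField = "title" | "department" | "officeLocation" | "mobilePhone" | "email";

interface DisplayTemplate {
  // The areas are fixed by the designers; only the field bound to each area is configurable.
  listRowSubtitle: DirectoryField;
  detailHeaderLine: DirectoryField;
  detailInfoRows: DirectoryField[];
}

// One customer's customization stays inside the designer-defined template.
const exampleDeployment: DisplayTemplate = {
  listRowSubtitle: "title",
  detailHeaderLine: "department",
  detailInfoRows: ["officeLocation", "mobilePhone", "email"],
};

// The app renders from the template, so typography and layout remain intact.
function renderDetailRows(
  template: DisplayTemplate,
  employee: Record<DirectoryField, string>
): string[] {
  return template.detailInfoRows.map((field) => `${field}: ${employee[field]}`);
}

console.log(renderDetailRows(exampleDeployment, {
  title: "Account Executive",
  department: "Sales",
  officeLocation: "Chicago",
  mobilePhone: "555-0100",
  email: "jane@example.com",
}));
```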

What is behavioral presence?


It’s important to consider the best method for delivering your message to the person you’re communicating with. There’s a right way to utilize enterprise social networking and a wrong way. Knowing how people wish to be contacted is essential for effectively conveying your message and eliciting the response you want.
The vast array of communication tools, coupled with personal preferences, creates a dilemma about how to communicate most effectively. Systems have started to offer simple availability indicators; tools today can look into your Outlook schedule and gauge your availability based on your calendar. Although these "red" or "green" indicators do give insight into calendar availability, there is an inherent problem with system-generated cues: neither Active Directory status nor IM status considers the human element of how people prefer to be contacted.
There is a huge need for system-generated presence, but it doesn’t reveal the whole picture of how people actually want to be contacted. For example, you might be on your normal Monday status call as a listening participant, but not actively contributing content. Many people multitask during conference calls to take care of other pressing business needs. Although your ears are actively engaged in the conference call, your fingers are available to chat or email.
Behavioral presence is the combination of system-generated presence and desired contact preferences. Your message will often dictate the most effective way to connect with someone. In-person meetings, texts, IMs, or emails are all valid mediums to communicate your message. Regardless of which you choose, consider variables like the timing of your message and the communication preferences of the person you’re contacting. Many times the outcome of a conversation is more about how you present your message than the message itself.
Tools today are getting smarter. They can track the most contacted person in your address book and even provide historical trends on the subjects you communicate about most. The first step in creating a behavioral presence model is taking system-generated availability and combining it with a platform that gives people choices about how they prefer to be contacted. There needs to be an emphasis not only on the ease of stating your preferences, but also on access to those stated preferences. Your tools should be a perfect marriage of systems and human intervention. The flexibility of the tool should support the ever-changing attitudes of your employees and not depend on your calendar alone. It may never be perfect, but it might eliminate some of those uncomfortable email exchanges with your boss.
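As a rough illustration of that first step, here is a small sketch of how calendar-derived status and stated contact preferences might be combined to suggest a channel. The types, rules and defaults are my own assumptions for the example, not any particular product's behavior.

```typescript
// Minimal sketch of "behavioral presence": system-generated status combined
// with the person's stated contact preferences. All rules here are illustrative.

type SystemStatus = "free" | "inMeeting" | "offline";
type Channel = "im" | "email" | "phone" | "inPerson";

interface ContactPreferences {
  // Channels the person will accept while their calendar shows a meeting
  // (e.g., listening in on a Monday status call).
  whileInMeeting: Channel[];
  // Channels preferred when the calendar shows them as free.
  whileFree: Channel[];
}

// Suggest a channel from both signals; fall back to email when nothing matches.
function suggestChannel(status: SystemStatus, prefs: ContactPreferences): Channel {
  if (status === "offline") return "email";
  const allowed = status === "inMeeting" ? prefs.whileInMeeting : prefs.whileFree;
  return allowed[0] ?? "email";
}

// Example: the calendar says "inMeeting", but the person has said IMs are fine
// during such calls, so the suggestion is "im" rather than waiting until later.
const prefs: ContactPreferences = {
  whileInMeeting: ["im", "email"],
  whileFree: ["phone", "im"],
};
console.log(suggestChannel("inMeeting", prefs)); // "im"
```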
Regardless of how many tools or resources you use to keep in touch, people are still people and will communicate the way they prefer to. Learning how to best use today's tools will not only help you maximize the impact of your message but also help you foster better relationships with the people you're in contact with each day. When deciding which method of communication to use, you must first determine the types of tools you use from day to day and how you use them. For example, when considering the best way to communicate on both mobile and tablet devices, it helps to understand how these devices are traditionally used. Desktops and laptops are traditionally optimized as production devices: most individuals produce the majority of their digital content – everything from Microsoft Word documents to Microsoft PowerPoint presentations – using keyboards and 13–15” displays. Though these devices are increasingly used to view media through services such as Netflix and Hulu, the convergence of TVs and digital displays has kept traditional computers largely in the information production and consumption space.

Friday, 2 November 2012

Integrating Usability Testing into Agile Software Development


[Image: Agile Development Life Cycle]
The best way to validate design and requirement decisions with your users is via usability testing. The general idea is to gather feedback on early concepts, understand workflows and user pain points, and allow designers and product managers to document findings and create and maintain personas, roles and system workflows.
Agile methodology lends itself extremely well to iterative usability testing. In this article I'll review how my own team plans to integrate usability testing into our product development cycle.
Test Subjects
As a general rule, the best test subjects are real product users, but it is better to test anyone rather than not to test at all. We've created three tiers of participants:
· Tier 1: Customers/Partners – Ideally, we only test real users from our customers and partners in the field. We are establishing a charter customer program to make real users readily available for such testing.
· Tier 2: Recruited Participants – When needed, we may supplement our testing with recruited participants – either paid, gifted or volunteer.
· Tier 3: Internal Employees – As a last resort, if no other participants are available, we'll test our ideas on Sales, Marketing, Business Development, and/or any other customer-facing groups. This approach still provides valuable feedback and validation, but we need to keep in mind that results may be slightly skewed.
Planning, Preparation and Facilitation
To prepare for the test, our design team collaborates closely with product management to define the usability tasks to test and the interview questions concerning value. We try to concentrate on the key tasks that users will do most of the time.
How we test users is determined by multiple factors, including what type of test environment is needed (i.e., Web v. Mobile), as well as when and where we are testing. Over the years I've tested in-house, remotely (using a video conferencing tool like Adobe Connect, JoinMe or Skype) and on-site at a customer's office. The latter is usually the best case since you can get a feel for the user's actual workspace limitations, workflows and peer interactions – but it is not always possible, so take what you can get.
We also consult with our product team to decide the most efficient approach, based on the available time and resources needed to:
· Create a wireframe or build an interactive prototype
· Present working code in a test environment
· Use card sorting, paper prototypes and/or taxonomy studies
When conducting the test, it's important to give the user a proper orientation to the environment and to ask for permission to record. It also helps to mention that they won't be hurting your feelings by giving an honest opinion, and to continually remind them to "think aloud" to provide feedback and context. Watch what they do, and observe both verbal and non-verbal cues about why they fail or become confused.
The testing environment and equipment (laptop, phone, tablet, software) may vary depending on the product type (desktop v. mobile), but should be a quiet, closed space that can accommodate at least a tester, a facilitator and a note taker. Recording audio and video is highly recommended; using cameras to capture both the screen and the user's face adds emotion and context to any issues that arise. A laptop equipped with a webcam and screen recording software is ideal — I typically use Silverback on a MacBook Pro for this. Remember that face-to-face testing is best, but any testing is better than no testing.
When to Test
Our recommendation is to test users early and often, during all phases of the product lifecycle. This includes:
Conceptual/Prototyping Stages – This happens early in the project, during the requirements/planning stage (iteration -1, no code or design specifications). Try to recruit 8-10 testers to participate in 45-60 minute individual sessions covering multiple features or a new application. It's critical to get input from existing customers, both for validation and for understanding real workflows, via wireframes and/or an interactive prototype. This type of testing usually requires a detailed script and covers a new product or multiple user stories. In the past, I've successfully used Flash, HTML, Axure and PowerPoint to create mid-to-high fidelity interactive prototypes. I've found that the choice of tool is less important than its ability to simulate an experience that mimics the desired end product, or to be delivered within the chosen test platform – and you absolutely must be able to work efficiently with the tool.
Development Stages – During product development, testing is done during each sprint. The duration of the sprint isn't important, as long as at least one round of testing is conducted per sprint. Usability testing is much lighter and more informal during this stage, so try to recruit about 5 participants for 15-30 minute individual sessions that focus on a specific feature. You'll be able to quickly spot trends and patterns in usage, which will allow you to iterate on your design during the sprint, if needed. Tests can use either live code (from a QA environment) or a quick wireframe/mockup if testing items for a future sprint. Allocate a set number of hours for usability testing activity in the sprint backlog – to be burned down – making sure that all UT activities (planning, testing and analysis) fit within your time estimate.
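To illustrate the arithmetic (the numbers here are hypothetical, not a rule): five 30-minute sessions come to about 2.5 hours of testing; add roughly 2 hours of planning and 2 hours of analysis, and the backlog item lands at around 6-7 hours, which is easy to burn down alongside the rest of the sprint work.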
[Figure 1: Usability Testing Process]
After testing is completed, we generate a Usability Observation List that is shared with the team (via a verbal review at scrum, and entered into our wiki within JIRA). Product Management prioritizes these results into one of two categories: Bugs or Enhancement Requests. Bugs are entered into the tracking system (we use JIRA, but TFS, or even something as simple as Excel, works as well) and should be fixed before the end of the sprint. See Figure 1 for a visual representation of our process.
Post Launch – After your product is launched, be sure to hold an internal retrospective with the product/feature team to discuss what was successful versus what was a problem, and consider how to streamline and improve your UT process. In the past, I've used surveys, email discussions, message boards and field testing to gather feedback from end users. It's also helpful to have the product team speak directly with external product champions about adoption rates and to ensure reference-ability for marketing and future UT sessions. Last, and most importantly, reach out to your charter customers to follow up on the end results now that they are using the features in their daily workflows.
Final Deliverables
Our deliverables vary depending on what type of testing has been done. For conceptual testing, deliverables typically include a Summary & Recommendations document, which may include a deep dive into the root causes of failures based on actual observations and conversations, along with concrete recommendations to improve the user experience. Recommendations are categorized as Urgent, Important or Nice-to-Have to help the product team accurately prioritize, and the overall scores, statistics and notes are presented. This document is uploaded into the wiki, along with any audio/video recordings, which are archived for reference. If you have the resources to convert the recordings into transcripts, they can be quite helpful for searching and quick scanning.
Deliverables during sprint testing usually include the Usability Observation List that quickly identifies the points of failure and provides recommendations to improve the user experience. These findings are communicated in the daily scrum and uploaded to the wiki for future reference (along with any audio/video recordings).
Finally, the post-launch deliverables can include a retrospective meeting, updates to the feature enhancement list based on customer feedback, and identification of reference-able customers. This last deliverable can lead to highlighting a customer in a case study, or inviting interested customers to participate in future UT sessions.
Conclusion
As you can see, user testing is an invaluable part of the agile methodology that can help you better understand your customers' needs and make your products and apps more useful. No matter how or when you test, you'll reap many benefits, such as early problem detection, increased user satisfaction, reduced support costs and increased efficiency for users. And you'll constantly be amazed by the innovative ideas for enhancements and new features that come directly from your customers.
The bottom line is that if you are developing in agile, then you should be incorporating some form of usability testing into your iterative process. It does not need to be a formal or expensive process – it just needs to capture user feedback in some way so that you can ensure a usable product. As Admiral Grace Hopper once said, “One accurate measurement is worth more than a thousand expert opinions.”