moOz

Find someone to have fun with

Introduction

We are living in the age of the digital revolution. Our lives are full of gadgets that constantly pull our attention away the moment we are idle. We tap our smartphones all the time – for messaging, browsing social media, or looking for someone to ask out on a date. These technologies bring plenty of benefits, but they can erode our social life if we get hooked on them – which is not that difficult, given their nature.
Let’s face it: nowadays it is more difficult to establish social relationships offline – especially over a certain age.

Mooz is a mobile application which was mainly designed to help people start a conversation with strangers at places such as pubs, nightclubs, or libraries.

I worked on the project at an agency called Pocket Solutions in Budapest.

The team that delivered the project consisted of the following roles:

  • UX Designers (4 people, me included)
  • Mobile developers (4 people)
  • Mooz’s marketing manager and CEO (both fully involved)

Initial steps

When our client turned to us with the idea, they wanted to validate their assumptions first.

Their main hypothesis was that many people often feel awkward when they try to talk to strangers.

On the other hand, applications like Tinder or Happn seem ineffective when we are standing at the bar and see someone we want to approach. The owners of Mooz dreamed of a real-time matchmaking experience where people could check into places and see the profiles of other people who are at that same place.

Our design team was a great advocate of Jake Knapp’s Design Sprint, which we introduced to Mooz’s stakeholders as a highly organised way of validating ideas. They took our advice.

Design sprint

At the beginning of that week we assigned the decision maker’s role to the CEO of Mooz. He had the right to change any part of the concept at any time. One of my colleagues was the sprint facilitator, and the rest of us contributed as experts in our own areas.

We thought that the most important user journey to test would be when someone goes to a place, checks in, and starts reacting to other people’s profiles.

Compared to Tinder, in Mooz you must be at the physical location to check in, and only after that can you see others’ profiles. You can then express your interest in someone by tapping a “like” or “friend” button. The core distinction between the two is that “friending” means you want to grab a few beers with someone, whereas a “like” means you are into the other person. The rest is classic matchmaking: if both of you “like” or “friend” each other back, the two of you can talk via chat.
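The reciprocity rule is simple enough to express in a few lines. The sketch below is a hypothetical, simplified model of it – the names Reaction, ReactionEvent, visibleProfiles, and canChat are mine, not from Mooz’s actual codebase – but it captures the idea: profiles are only visible at the same venue, and a chat opens only once both people have reacted to each other.

```typescript
// Hypothetical sketch of Mooz's check-in and reciprocal matching rules
// (illustrative only, not the app's real code).
type Reaction = "like" | "friend";

interface CheckIn {
  userId: string;
  venueId: string; // users only appear to each other at the same venue
}

interface ReactionEvent {
  from: string;   // user who tapped the button
  to: string;     // profile they reacted to
  kind: Reaction; // "friend" = grab a beer, "like" = romantic interest
}

// Only profiles checked into the same venue are visible to the viewer.
function visibleProfiles(viewerVenue: string, checkIns: CheckIn[]): string[] {
  return checkIns.filter(c => c.venueId === viewerVenue).map(c => c.userId);
}

// A chat opens only when both users have reacted to each other,
// regardless of whether the reaction was a "like" or a "friend".
function canChat(a: string, b: string, reactions: ReactionEvent[]): boolean {
  const aToB = reactions.some(r => r.from === a && r.to === b);
  const bToA = reactions.some(r => r.from === b && r.to === a);
  return aToB && bToA;
}
```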

During the sprint we were highly productive. Dot voting proved very effective for sorting our drafts, and we could easily stick to the timeframes defined in the Sprint guide.

By the end of that week we had created a fully functional prototype of the user journey described above, and we were lucky enough to carry out our usability tests on the Budapest Party Boat. The feedback on the concept was overwhelmingly favourable.

Key findings

We conducted eight usability tests, inviting people to test our prototype in exchange for a drink. The interviews were not conducted by a single person; we rotated the interviewer’s role among the members of our UX team.

When we assessed our test results, our findings were the following:

  • If someone doesn’t check into a place, they cannot see others’ profiles, and if they are not close enough to a place, they cannot check in. They must physically be there to see others’ profiles, which made people feel safe.
  • Our interviewees thought it would be more fun to meet someone right after matching than to keep chatting online.
  • Mooz is not only a dating app. People can form any kind of relationship through it.
  • People were worried that after release, until the app reaches a critical mass of users, they would see lots of empty places when they checked in, which could result in massive uninstalls. However, Mooz’s marketing team was familiar with this issue and worked on a strategy to handle user retention.

MVP

By the end of the Sprint we had enough data from our qualitative research to define the scope of a minimum viable product – thanks to the Sprint’s success, Mooz’s CEO decided not to shut the project down.

We created a process map to represent certain flows and transitions between screens.

It was far more detailed and comprehensive than the journey map we had created throughout the Sprint. When we had finished with it, we prioritised the newly added features. Once our design team and Mooz’s stakeholders agreed on what the MVP should be, we sketched out our ideas.

Paper Prototyping

We drew every possible screen variation on paper, took photos of our sketches, and created a paper prototype in InVision to test our concept a bit further. It was rapid, lightweight, and effective.

Wireframing

Once we had validated our drafts, I created digital wireframes from them in Sketch.
I made a clickable low-fidelity prototype based on our blueprint in a few days. Not long after that I left the company, so I couldn’t work on the hi-fi version of the interface. However, my former colleagues did an excellent job creating a vibrant and fully branded interface for the application.

Release

The first time I realised that Mooz had been released was when I was walking down to the underground and saw an advertisement for it. It felt so good that the app we had been working on for such a long time was finally out. I was so proud that I even took a few pictures of it.

Render Node Monitor

Render farm management tool

Computer-Generated Imagery (CGI) is a well-known technique whose history goes hand in hand with the history of the computer itself. Nowadays hardly any Hollywood movie is made without it, and it has a huge impact on other industries as well, such as architecture, product design, and advertising.

Rendering is the automatic process of generating an image from a 2D or 3D model.

Rendering is a computationally intensive task: even on a “strong” computer, a single image can take several minutes to render. Imagine how long it would take one computer to render an hour-long movie consisting of 24 images per second if the average render time per image is 15 minutes. It would take exactly 1,296,000 minutes, or 21,600 hours, or 900 days – roughly two and a half years.

To reduce this time there is a technique called distributed rendering, which allows a rendering job to run on multiple machines simultaneously. With this method the machines share their available hardware resources, which results in faster production. In professional terminology, such a group of machines is called a “render farm”.
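To make these numbers concrete, here is a small back-of-the-envelope calculation (illustrative only; the node counts at the end are arbitrary examples). It reproduces the single-machine figures above and shows how an idealised farm of N machines divides that time, ignoring scheduling and transfer overhead.

```typescript
// Back-of-the-envelope render time estimate (illustrative only).
const FPS = 24;               // frames per second
const MOVIE_HOURS = 1;        // a one-hour movie
const MINUTES_PER_FRAME = 15; // average render time per frame

const totalFrames = MOVIE_HOURS * 3600 * FPS;                 // 86,400 frames
const singleMachineMinutes = totalFrames * MINUTES_PER_FRAME; // 1,296,000 min

console.log(singleMachineMinutes / 60);      // 21,600 hours
console.log(singleMachineMinutes / 60 / 24); // 900 days (~2.5 years)

// An idealised render farm of N machines divides the work evenly,
// ignoring scheduling and file transfer overhead.
function farmDays(nodes: number): number {
  return singleMachineMinutes / nodes / 60 / 24;
}

console.log(farmDays(50));  // 18 days
console.log(farmDays(200)); // 4.5 days
```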

Companies that have their own render farms often face difficulties managing them. This is how the idea of Render Node Monitor came about.

RNM is a render farm management system that displays each rendering process on a web-based dashboard in terms of machine performance, threats, warnings, issues, and task completion. It also provides remote access to the rendering machines, statistical data about the farm, and so forth.
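As a rough illustration of the kind of data such a dashboard aggregates, a status report from a single render node might look something like the shape below. The field names are my own assumptions for illustration, not RNM’s actual API.

```typescript
// Hypothetical shape of a per-node status report on a render farm dashboard
// (field names are assumptions, not RNM's actual API).
interface NodeStatus {
  nodeId: string;
  hostname: string;
  online: boolean;
  cpuLoadPercent: number;    // machine performance
  memoryUsedPercent: number;
  warnings: string[];        // e.g. "GPU temperature high"
  errors: string[];          // e.g. "render job failed"
  currentJob?: {
    jobId: string;
    application: string;     // the rendering application in use
    progressPercent: number; // task completion
  };
}
```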

User Experience Research

Journey and screen mapping

How did the project start?

The target user base of the product was small and medium-sized companies and freelancers, for two reasons. Large companies such as Pixar Studios or Industrial Light and Magic can afford to develop their own software in-house.
Besides, we found that affordable solutions for small companies were lacking.

At the initial stage of the project we conducted numerous interviews with professional CGI studios and freelancers about their workflows.

What difficulties do they deal with?

We found that our interviewees often struggle to track who is using which rendering machine and for how long. They also find it difficult whenever a render job fails, because it takes a while to find out what happened. It also seemed important to them to get notified when a render is done, to send commands to rendering applications or machines (for example, putting a machine to sleep when the job is done), and to access any machine remotely, since some machines are simply not plugged into a monitor.

Sorting out and prioritizing

We got numerous great ideas from our interviewees. Since our budget was limited, we had to prioritize the feature requests and keep only those in the concept that seemed useful for most of them.

User journey and screen maps

We found that the most important journey users take is adding machines and applications to the dashboard. It was extremely important for them to know where to find the ‘agent application’ that needs to be installed on every machine they want to control through the RNM dashboard. We designed a multi-step onboarding system, available from the first login, that shows every step necessary to initialize a machine and an application. Unfortunately we could not implement the onboarding in as much detail as we originally designed, because the development costs would have exceeded the budget, but we created a lighter version of it.

Sketching and wireframing

Rapid ideation on paper

Sketching

Once we had worked out the detailed concept of the application, our team sketched the interface. We sketched several variations of each screen, after which the team voted on which solution to use. Sometimes we merged each other’s sketches to get the best results.

Wireframing

Once our sketches had been digitized as wireframes, I set out to prepare a fully functional prototype for user tests in Axure RP. At that point it turned out that we hadn’t worked out the subscription model yet, so I took personal responsibility for designing it afterwards – and I take great pride in the job I finally completed.

Interface Design

Design deliverables

Registration / Login

The registration and login forms are the product’s official first look. (The unofficial one is the landing page, which is NOT my work.) I wanted to make them similar to Google’s layouts: generous white space reminiscent of paper forms, easy-to-read fonts, and clear copy.

Dashboard

The dashboard was designed for Full HD displays. It is important to note that RNM was not designed with a responsive layout – it is desktop only. The dashboard can be personalized by its users: they can rearrange the machine groups by dragging and dropping, and they can do the same with the machines inside the groups.

Groups can be opened and closed depending on whether they are in use. There is also a smart feature called paint selection, which allows users to select multiple machines or applications with a simple drag.
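Conceptually, paint selection is just a rectangle-intersection test between the dragged area and each tile on the dashboard. The sketch below is a minimal, hypothetical illustration of that idea (the Rect and Tile types are mine), not RNM’s actual implementation.

```typescript
// Minimal sketch of "paint selection": every tile whose bounds intersect
// the dragged rectangle becomes selected. Illustrative only.
interface Rect { x: number; y: number; width: number; height: number; }
interface Tile { id: string; bounds: Rect; }

function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.width &&
         b.x < a.x + a.width &&
         a.y < b.y + b.height &&
         b.y < a.y + a.height;
}

// Returns the ids of all machine/application tiles covered by the drag.
function paintSelect(dragArea: Rect, tiles: Tile[]): string[] {
  return tiles.filter(t => intersects(dragArea, t.bounds)).map(t => t.id);
}
```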

Payment

At the beginning we planned to power our subscription system with the Braintree payment gateway. As time passed we changed our minds and decided to use FastSpring, because they specialise in selling software licences through their platform and we found their transaction fees more affordable. While designing the subscription model we had to take several issues into account: our service offers both prepaid yearly and monthly options, we charge based on the number of machines in the network, and we also had to handle upgrades or downgrades to the number of machines in a subscription, which alter the fees we charge. It was challenging, but I enjoyed every bit of the work.
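To give a feel for the kind of rules the subscription design had to encode, here is a deliberately simplified sketch: per-machine pricing for monthly or prepaid yearly plans, plus a prorated charge when machines are added mid-cycle. All prices, names, and the proration policy itself are placeholders I made up for illustration, not RNM’s real pricing.

```typescript
// Simplified sketch of per-machine subscription pricing
// (placeholder numbers and rules, not RNM's real price list).
type BillingCycle = "monthly" | "yearly";

const PRICE_PER_MACHINE = { monthly: 5, yearly: 50 }; // hypothetical prices

// Base fee for one billing cycle, charged per machine in the network.
function cycleFee(machines: number, cycle: BillingCycle): number {
  return machines * PRICE_PER_MACHINE[cycle];
}

// When machines are added mid-cycle, charge only for the remaining fraction
// of the current billing period (one possible policy among many).
function prorationCharge(
  addedMachines: number,
  cycle: BillingCycle,
  daysLeftInCycle: number,
): number {
  const cycleDays = cycle === "monthly" ? 30 : 365;
  const fraction = daysLeftInCycle / cycleDays;
  return addedMachines * PRICE_PER_MACHINE[cycle] * fraction;
}

// Example: adding 3 machines to a monthly plan with 12 days left.
console.log(prorationCharge(3, "monthly", 12)); // 3 * 5 * 0.4 = 6
```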

Settings

The settings menu is the most complex interface users interact with, so we aimed to make it extremely user-friendly.

Our main goal was to reduce the perceived complexity of navigation. Since people nowadays are used to infinite scrolling on social media, we decided to arrange the settings similarly: a scrollable menu structure in which each submenu is a tile that uses a toggle mechanism to break up its content. We received a lot of positive feedback on it during tests.

History log and Statistics

We received requests from users to provide statistical data about farm usage and to display error logs when a render fails or malfunctions during the rendering process. RNM now supports several different rendering applications and can collect and display their error logs, as well as basic information about the rendering machines.