Fetch Robotics CEO Melonee Wise Answered Questions from the Internet. Here are the Highlights

Image courtesy of Fetch Robotics

Fetch Robotics CEO Melonee Wise did a Quora Q&A session this week, in which Quora users of all levels of engineering knowledge submitted questions on topics ranging from BattleBots to career advice. We followed along and collected some of the most interesting highlights.

According to an interview with Robotics Business Review, Wise has always had a passion for robotics, going back to age 8, when she built a Lego pen plotter. After earning her mechanical engineering degree from the University of Illinois at Urbana-Champaign, Wise interned at Alcoa, DaimlerChrysler, and Honeywell Aerospace. In 2012, she took the position of manager of robot development at Willow Garage, the developers of ROS.

Wise is a hardcore mechanical engineer at heart and is also the public face of Fetch Robotics, so it’s interesting to see how she answered these questions. We’ve compiled the highlights from the Q&A session below.


Q: I am a teenager who has an interest in robotics. The highest level of math I have taken is algebra II. How would I start learning about making robots and programming them?

MW: Let me start by saying, don’t worry about how much math you know. That will come with time, and many basic robot programs require no more than basic math skills.

I would say the first step to getting started is learning a scripting language like Python. There are several advantages to learning and using Python, the largest being that you don’t need a lot of extra knowledge (compilers, data types, pointers, etc.) to get started writing simple scripts.
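To illustrate the point, here is a hedged sketch of the kind of first script she means: no compiler, no type declarations, just a few lines of basic math. The scenario and function name are our own, not from the Q&A:

```python
import math

def rotations_for_distance(distance_m, wheel_diameter_m):
    """How many full wheel rotations cover a given distance."""
    circumference_m = math.pi * wheel_diameter_m
    return distance_m / circumference_m

# A 70 mm wheel covering one meter takes about 4.5 rotations.
print(round(rotations_for_distance(1.0, 0.07), 2))
```

That is the whole program; Python lets a beginner stop there and still see a result.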

Then I would say learn some basic Linux, purchase a cheap robot that has ROS drivers (robots.ros.org), like a Sphero, Kobuki, or TurtleBot3, and start programming. Nothing is more motivating than watching a robot drive around, and programming is the biggest skill you need to become a roboticist.
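The ROS specifics vary by robot, but the velocity commands that make a robot drive around come down to simple math. As a flavor of what such a program computes, here is a plain-Python sketch (no ROS installation required) of the unicycle model that a ROS velocity command effectively drives; the function name and numbers are illustrative assumptions:

```python
import math

def drive(x, y, theta, v, w, dt, steps):
    """Integrate a constant velocity command (linear v, angular w)
    using the unicycle model: x' = v*cos(theta), y' = v*sin(theta), theta' = w."""
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

# Drive forward at 0.2 m/s with no turning for 5 seconds: ~1 m traveled.
x, y, theta = drive(0.0, 0.0, 0.0, v=0.2, w=0.0, dt=0.1, steps=50)
print(round(x, 2), round(y, 2))  # prints: 1.0 0.0
```

On a real ROS robot, the same (v, w) pair would be published as a `geometry_msgs/Twist` message and the robot, rather than this loop, would do the integrating.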

It can also be helpful to find a friend to learn and create new robots with. For me, that was my friend Derek. We met our freshman year of college and started building robots together. We’re still friends; we ended up working at Willow Garage together and eventually started Fetch Robotics together.


Q: What are the elements that go into creating successful BattleBots?

MW: I would say this depends on what your definition of successful is. People tend to build BattleBots for different reasons.

Wise and colleagues at RoboGames 2015. Wise is holding her 3D-printed BattleBot, Phoenix Rooster.

If your focus is on winning BattleBot matches then I think that the RioBotz team has, by far, the most comprehensive tutorial for building all the basic styles of BattleBots. Their ComBot Tutorial book is extensive and many people consider it to be “The BattleBot Guide”.

If your focus is “flair” or a cool BattleBot, then I think the best way to be successful is to choose a way to limit yourself and force creative design. For example, the last couple of BattleBots I designed were about “flair”, and I limited myself to designs built from sheet metal and 3D-printed parts.

If you’re looking for inspiration, you should check out the Weaponized Plastic Fighting League (WPFL) videos from the BattleBot competitions we hosted at Fetch Robotics.


Q: What are the skills required to become a robotics expert with a CS engineering degree? Which is the most important programming language used in robotics today?

MW: Becoming a robotics expert takes time, and a lot of that time spent using and programming robots. In general, some of the best roboticists that I know have a degree (B.S. or M.S.) or extensive experience in two of the following:

  • Computer Science
  • Electrical or Computer Engineering
  • Mechanical Engineering

This, of course, is not a hard requirement but that is my experience in general. However, all of that education isn’t worth much unless you’ve had significant hands-on experience with robots. You’ll find out very quickly that robots have their quirks/challenges and that running code in simulation is typically nothing like the real world, largely because much of robotics is probabilistic and sensor data is noisy. At Fetch Robotics we prioritize hiring individuals with significant hands-on robot experience over individuals with limited experience.
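The point about noisy sensor data can be made concrete with a small sketch. This is a toy simulation of our own, not Fetch code: a range sensor returns the true distance plus Gaussian noise, so robot software typically averages or filters readings rather than trusting any single one.

```python
import random

def read_range(true_distance_m, noise_std_m, rng):
    """Simulated noisy range sensor: the true distance plus Gaussian noise."""
    return true_distance_m + rng.gauss(0.0, noise_std_m)

def filtered_range(true_distance_m, noise_std_m, samples, seed=42):
    """Average repeated readings; the noise shrinks roughly as 1/sqrt(samples)."""
    rng = random.Random(seed)
    total = sum(read_range(true_distance_m, noise_std_m, rng) for _ in range(samples))
    return total / samples

# One noisy reading vs. an average of 200 readings of a wall 2 m away:
print(filtered_range(2.0, 0.05, samples=1))
print(filtered_range(2.0, 0.05, samples=200))
```

In simulation you can set `noise_std_m` to zero and everything works on the first try; on a real robot you never get that luxury.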

In terms of programming languages, most of the robotics companies, research institutions, and projects today are built on top of ROS. The main languages supported in ROS are C++ and Python.

I would say that most on-robot applications are written in C++ in Linux-based environments, largely for stability and runtime CPU optimization. Off-robot applications are more diverse and tend to use a mixture of languages depending on the application (e.g., JavaScript for web-based applications, Python for state machine applications, etc.).
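As a flavor of the state machine pattern Wise mentions for off-robot applications, here is a minimal plain-Python sketch: a table maps (state, event) pairs to the next state. The states, events, and task (a pickup-and-deliver cycle) are illustrative assumptions, not taken from any Fetch system:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "order_received"): "navigating",
    ("navigating", "arrived"): "picking",
    ("picking", "item_grasped"): "delivering",
    ("delivering", "dropped_off"): "idle",
}

def step(state, event):
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["order_received", "arrived", "item_grasped", "dropped_off"]:
    state = step(state, event)
print(state)  # prints: idle
```

Keeping the logic in a table like this makes the task flow easy to audit and extend, which is one reason scripting languages fit this layer well.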


Q: What are the biggest challenges in computer vision for robots, especially in the industries of autonomous vehicles and service bots?

MW: I think the biggest challenges in computer vision today are generalization and introspection.

We have created many data sets to help train algorithms to solve specific tasks. However, many of those algorithms don’t generalize across broader data sets, nor do they always solve the problems we are hoping to solve.

Let’s look at introspection first; an interesting example is distinguishing dogs from wolves. An algorithm was trained to detect whether a picture showed a husky or a wolf, yet it actually ended up learning to detect the presence of snow, because all the pictures of wolves in the training set had snow in the background.

Many researchers are now starting to look at how to create tools that let algorithms “explain” their decision-making process. In the husky vs. wolf case, the algorithm output the part of the image that it used to decide whether it was a wolf or a husky.
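The husky vs. wolf failure can be sketched in a few lines of plain Python: a "classifier" that keys on a spurious background feature looks perfect on its training data and collapses as soon as the correlation breaks. The data here is entirely made up for illustration:

```python
# Each example is (is_wolf_label, has_snow_background).
train = [(1, 1)] * 20 + [(0, 0)] * 20   # wolves always photographed on snow
test = [(1, 0)] * 10 + [(0, 1)] * 10    # correlation reversed at test time

def snow_classifier(has_snow):
    """A 'wolf detector' that actually just detects snow."""
    return 1 if has_snow else 0

def accuracy(data):
    return sum(snow_classifier(snow) == label for label, snow in data) / len(data)

print(accuracy(train), accuracy(test))  # prints: 1.0 0.0
```

Introspection tools exist precisely to catch this kind of shortcut before the correlation breaks in the field.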


In the case of generalization, we have large data sets like ImageNet (15M labeled images); however, all of these images were taken by people, and they have many advantages over the images that machines such as robots and autonomous cars would capture.

For example, when people take pictures, they typically take them in good lighting conditions and center the subject in the frame. Robots, autonomous cars, and other machines, by contrast, capture data (pictures, laser scans, etc.) and then try to find the subject without even knowing whether the subject is in the captured data at all, or, in the case of an image, whether the lights were on.


Many researchers are now looking at ways (such as GANs) to generate larger and broader data sets that are more representative of the data that would be gathered by machines rather than by humans.

Overall, I would say we have a long way to go; we have barely scratched the surface of these problems.


Click here to check out the full Quora Session.