With my wonderful collaborators Cynthia Matuszek, Hadas Kress-Gazit and Nakul Gopalan, we published a review paper about language and robots. This article surveys the use of natural language in robotics from a robotics point of view. To use human language, robots must map words to aspects of the physical world, mediated by the robot’s sensors and actuators. This problem differs from other natural language processing domains due to the need to ground the language to noisy percepts and physical actions. We describe central aspects of language use by robots, including understanding natural language requests, using language to drive learning about the physical world, and engaging in collaborative dialogue with a human partner. We describe common approaches, roughly divided into learning methods, logic-based methods, and methods that focus on questions of human–robot interaction. Finally, we describe several application domains for language-using robots.
It was a wonderful opportunity to reflect on the past 10 years since my Ph.D. thesis in 2010 and survey the amazing progress we have seen in robots and language.
Online Lectures
Musings · Stefanie Tellex
Many of us are preparing to virtualize our classes in response to the COVID-19 outbreak that is spreading in the United States. I wanted to share my experiences teaching my Introduction to Robotics class with video lectures last semester. My motivation for doing online lectures last fall had nothing to do with the COVID-19 outbreak. First, I felt that the time I spent preparing for lecture each class had to be spent again each year. Although I reused slides, I still needed to refresh myself on the slides before each lecture and prepare mentally for what I was going to say and when I was going to say it. Second, it seemed that blasting out a lecture at a group of students who are mostly passive was not the best way to use my face time with students. I would of course ask the group questions, and mix in pair-wise exercises where the students talk with a partner. However, most of the time was me blasting technical material at them, and I wanted more informal small-group interactions in the class period. Third, as we expand our drone program, it seemed useful to expand access to the lecture materials beyond the students who are on Brown’s campus.
My starting point was a curriculum, syllabus, and set of lecture slides that I developed for the course over the past two years. I previously helped out a bit with the Udacity Flying Car Nanodegree, and one of the key lessons from that experience was to focus on small blocks of content. Based on the Sheridan Center’s advice, I started out each lecture by defining learning objectives. The verbs on this page were a helpful starting point. I decided to use the edge.edx.org platform to host the online lecture content, because it takes care of the hosting and provides a framework for videos as well as auto-graded questions, and YouTube to host the videos themselves. Defining learning objectives already put a lot more structure into my slides than I had before. (Yes, I know I should have been doing this all along!)
We experimented with more sophisticated types of questions, such as a drag-and-drop question for the different blocks of a PID controller, but ultimately stuck with multiple-choice questions. More complicated questions ended up being more work for only incremental benefit. We also considered having coding questions, for example in some kind of Jupyter notebook. However, we ultimately decided that the best way to assess coding was to use our existing projects and assignments, where students program on their own drone, on their own time, through structured assignments and projects. This simplified the creation of online lectures, since they only needed to assess content knowledge rather than the ability to write programs or derive equations.
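Since the lectures cover material like the PID controller mentioned above, here is a minimal sketch of the kind of controller students implement on the drone; the class structure and gains are illustrative, not taken from the actual course assignments:

    # Minimal PID controller sketch (illustrative; gains and structure are
    # hypothetical, not from the actual course materials).
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def step(self, error, dt):
            # Proportional term reacts to the current error.
            p = self.kp * error
            # Integral term accumulates past error to remove steady-state offset.
            self.integral += error * dt
            i = self.ki * self.integral
            # Derivative term damps the response using the error's rate of change.
            d = 0.0
            if self.prev_error is not None:
                d = self.kd * (error - self.prev_error) / dt
            self.prev_error = error
            return p + i + d

    # Example: hold a target altitude of 0.5 m, updating at 50 Hz.
    pid = PID(kp=1.0, ki=0.1, kd=0.2)
    thrust_adjustment = pid.step(error=0.5 - 0.45, dt=0.02)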
Once I had a process set up, it took me two to three hours to record content for an 80-minute lecture from my previously existing slides. The first block of time I would use to define learning objectives (which I probably should have been doing before, but wasn’t), as well as to create questions. The second block of time I used to record. I then uploaded the videos to YouTube. I used the built-in editing tool to crop the beginning and the end, where I was starting and stopping the screen grabber. Then I put the videos in EdX and entered the questions after each video.
During class, students could choose to build their drone, work on assignments, or watch the lectures. Students would do some of each. I used the class time to answer individual questions, help students debug their drones, and perform checkoffs. I felt that I got to know the individual students better with this in-class style than when I lectured to them, and they got more individual interactions from me. Effectively I used the class time as extended office hours. Of course this sort of interaction is hard to make happen now, remotely, but hopefully this experience will help those who are transitioning their classes online!
Grounding Language to Landmarks
Publication · Stefanie Tellex
When a person visits a strange city, they can follow directions by finding landmarks in a map. For example, they can follow instructions like “Go to the Wegmans supermarket” by finding the supermarket in their map of the city.
Autonomous systems on robots need the ability to understand natural language instructions referring to landmarks as well. Crucially, the robot’s model for understanding the user’s instructions should be flexible to new environments. If the robot has never seen “the Wegmans supermarket” before but is provided with a map of the environment that marks these locations, it should be able to follow these sorts of instructions. Unfortunately, existing approaches need not just a map of the environment, but also a corpus of natural language commands collected in that environment.
Our recent paper accepted to ICRA 2020, Grounding Language to Landmarks in Arbitrary Outdoor Environments, addresses this problem by enabling robots to use maps to interpret natural language commands. Our approach uses a novel domain-flexible language and planning model that enables untrained users to give landmark-based natural language instructions to a robot. We use OpenStreetMap (OSM) to identify landmarks in the robot’s environment, and leverage semantic information encoded in OSM’s representation of landmarks (such as type of business and/or street address) to assess similarity between the user’s natural language expression (for example, “medicine store”) and landmarks in the robot’s environment (CVS, medical research lab). You can download the collected natural language data from the h2r GitHub repository: http://github.com/h2r/Language-to-Landmarks-Data
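As a toy illustration of the matching problem (not the paper’s learned model), the sketch below scores landmarks by word overlap between the user’s expression and the semantic tags OSM might associate with each landmark; the landmark names and tags here are made up:

    # Toy sketch of matching a language expression to OSM landmarks by
    # semantic similarity. The paper's model is learned; plain token
    # overlap stands in for it here, and the landmark data is invented.
    def similarity(expression, landmark_tags):
        """Jaccard overlap between expression words and landmark tag words."""
        a = set(expression.lower().split())
        b = set(word for tag in landmark_tags for word in tag.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0

    landmarks = {
        "CVS": ["pharmacy", "medicine store", "shop"],
        "medical research lab": ["research", "laboratory", "office"],
    }

    expression = "medicine store"
    best = max(landmarks, key=lambda name: similarity(expression, landmarks[name]))
    print(best)  # CVS, because its tags share the words "medicine" and "store"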
We tested our model with a Skydio R1 drone in simulation, then outdoors. Here’s the video!
How to Read a Paper
Musings · Stefanie Tellex
The only way to really understand a paper is to implement that paper. Obviously we don’t have time or resources to do that for most of the papers we read. So most of the papers we read, we won’t really understand. But that’s okay!
Once you understand what problem the paper is solving, it is time to decide if it is worth digging deeper. Why is this paper important to your research? There are several common use cases. If the paper does not match any of these, it is okay to decide not to read further! You can’t possibly even skim every paper, much less implement them all, so you are always deciding how many resources to invest in trying to understand each one.
First, you could be interested in using this paper as a subcomponent of your own research system. For example, if your research is about making robots understand natural language commands, and this paper is about making robots pick up objects, you may be interested in using the ideas, algorithms and/or code presented in the paper as part of your language understanding system. If this is the case, you want to understand the inputs and outputs to the system and how it will connect to your planned system. You may also want to understand how well the system works; if you are aiming to produce a user study or collect a dataset, you need more reliability than if you are aiming to evaluate in simulation and only use this system for the video.
A second reason is that the paper may be solving a similar problem to the one you are solving, or even the same problem. This situation occurs frequently in NLP, where many people work on the same corpus and report results even using the same training/testing split. For example, the GeoQuery corpus has been used by many papers to evaluate semantic parsing on a common task, asking questions about a geographic database. If this is why you are reading the paper, then it is important to understand at a technical level how their approach to solving the problem differs from yours. It is also important to characterize the performance of your work relative to the other paper. Do you outperform them because you are using a more advanced algorithm? Or leveraging more training data? Maybe you perform similarly to them, but you are using less training data or less annotation, or perform better on an interesting subset of the dataset. Ultimately you want to make a determination of whether this paper can be cited in related work as solving a different problem, or if it is solving the same problem and therefore needs to be compared as a baseline to your approach.
A third reason to read the paper is that it may be solving a different problem that is only distantly related to your problem, but using a particular technical tool or approach that you would like to apply to your problem. For example, you might wish to apply a technique developed for translating English to French to translate between English and the symbolic language your robot uses. Then you want to pay attention to the similarities and differences between your problem and the problem this paper is solving. What changes will you need to make to your input to use the technique described in this paper? Do you expect it to work as well? Better? Worse? How long will it take to train or implement?
A final reason to read the paper is to understand open problems in the field, and figure out what problem you would like to work on. This is often the situation when you are starting out in research. There is sort of a chicken-and-egg problem here: you don’t know what problem to work on, so you read papers, but you will be more effective at reading papers once you know what problem you are working on. This feeling is okay and normal. In this situation, the most important thing to pay attention to is the problem the paper is solving. Ask yourself if it is an interesting problem. What are the important problems in your field? Is this paper describing one of them? Why or why not?
Any of these reasons may lead to your decision to implement the paper (even if an implementation already exists!). If you are using the paper as a subcomponent of your system, you may wish to own the implementation to make sure it works the way you expect and to manage dependencies, although more frequently in this situation you may find yourself using the author’s implementation (when it exists!). If you are comparing to this paper as a baseline, sometimes you will implement the baseline algorithm in order to make sure you understand it. It is important, when implementing, to first reproduce the published numbers on the published dataset before trying it on your new dataset. If you are using an established dataset, many times people will not even run the algorithm on that dataset but use the previously published numbers. If you are using the method as a subcomponent of yours, you will frequently find yourself implementing it yourself in order to deeply understand the method and integrate it with your system. If you are not sure what to work on, implementing an existing seminal paper is a great way to get started. It helps you understand the core tools of your field, and often leads to a new paper of your own when you fix a problem, identify a new use case, or define a new, related problem.
Robot Writing and Drawing
Publication · Stefanie Tellex
Writing is a form of human linguistic expression, and Atsu Kotani has investigated this by creating a system that enables a robot to write. Atsu’s work introduces an approach which enables manipulator robots to write handwritten characters or line drawings. Given an image of just-drawn handwritten characters, the robot infers a plan to replicate the image with a writing utensil, and then reproduces the image. Our approach draws each target stroke in one continuous drawing motion and does not rely on handcrafted rules or on predefined paths of characters. Instead, it learns to write from a dataset of demonstrations. We evaluate our approach in both simulation and on two real robots. Our model can draw handwritten characters in a variety of languages which are disjoint from the training set, such as Greek, Tamil, or Hindi, and can also reproduce any stroke-based drawing from an image of the drawing. This work was featured in the news, and you can learn more from the paper, which was published at ICRA last summer.
Advanced Autonomy on a Low-Cost Drone
Publication · Stefanie Tellex
We created a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and FastSLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python. The base station machine only needs to have a web browser and does not need to have ROS installed, making it easy to use in school settings.
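To give a flavor of the localization material, here is a minimal one-dimensional Monte Carlo Localization step. This is an illustrative sketch under simplified assumptions (a single shared noise parameter, and range to the nearest landmark as the measurement), not the course or paper implementation:

    import math
    import random

    # One step of 1-D Monte Carlo Localization (illustrative sketch only;
    # the course implementation runs on the drone's onboard sensors).
    def mcl_step(particles, control, measurement, landmarks, noise=0.1):
        # Motion update: shift each particle by the commanded motion plus noise.
        particles = [p + control + random.gauss(0.0, noise) for p in particles]
        # Measurement update: weight each particle by how well the distance
        # to its nearest landmark explains the range measurement.
        def likelihood(p):
            expected = min(abs(p - lm) for lm in landmarks)
            return math.exp(-((expected - measurement) ** 2) / (2 * noise ** 2))
        weights = [likelihood(p) for p in particles]
        total = sum(weights) or 1.0
        # Resample in proportion to weight, focusing particles on likely poses.
        return random.choices(particles, weights=[w / total for w in weights],
                              k=len(particles))

    # Example: localize along a 10 m corridor with landmarks at known positions.
    landmarks = [0.0, 5.0, 10.0]
    particles = [random.uniform(0.0, 10.0) for _ in range(500)]
    particles = mcl_step(particles, control=1.0, measurement=2.0,
                         landmarks=landmarks)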
In partnership with the Duckietown Foundation, we are releasing our course materials for anyone to use. The Operations Manual contains a description of the drone and how to use it. The accompanying book contains all our projects and assignments. This work is being presented at IROS this week and has been nominated for a best paper award! Read the full paper here.
How to Structure a Paper
Musings · Stefanie Tellex
This post describes my outline/structure for a technical paper. I have found a fairly specific structure works best for abstracts and introductions. For the other sections, I describe a process for writing them.
Abstract
The first sentence should be a general statement of the problem’s importance. The second sentence should summarize the shortcomings of previous work. It’s very tempting to make this go on for more than one sentence, but don’t give in – just one sentence! You will have more space later. The next few sentences should describe your technical solution to the problem. The last two or three sentences describe the evaluation: usually one sentence about the quantitative evaluation, and one sentence about the robot demonstration. Did it work? How do we know that it worked? This does not need to describe the whole evaluation, but should cover the most impressive thing the robot can do as a result of the paper. I think this abstract is a good example of following this template.
For introductions, the pattern is similar. The first paragraph should generally describe the problem, focusing on the problem, what it is, and why it is important. The second paragraph should succinctly describe related approaches and why they do not solve the problem. The next few paragraphs should describe our technical approach for solving the problem. The last paragraph should describe the evaluation and how we show that we have solved it. I like to have a figure on the first page of the paper (Figure 1) that summarizes or shows what the robot can do that it couldn’t do before. This figure is often the first thing reviewers see, and should show the “impressive result” of the paper visually, often a storyboard of the video.
Related Work
The aim of a related work section is to situate your contribution with respect to other people’s work. You want to give the reader an overview of papers that are solving similar problems or using similar methods. You also want to tell the reader why your contribution is new, different, and better.
Before starting, always always always use natbib. Even if the conference template claims not to support natbib, you can engineer it in anyway, and you should. Natbib contains \citet, which is a “noun phrase” citation such as “Tellex et al. (2011) wrote that”. It will do “et al.” with no period after et, and a period after al. It will always spell the author’s name right if your bibtex entry is right. Natbib saves time, makes your paper have fewer bugs, and avoids annoying people by spelling their name wrong (as long as you are careful to get it right once in the bibtex entry).

I always start by writing two sentences about each paper. The first sentence is a one-sentence summary of the paper. Example: “\citet{tellex11} showed a framework that learns to map between language and robot trajectories, enabling a robotic forklift to understand natural language commands about movement and manipulation.” The second sentence is a statement of the differences between their paper and your paper. For example, “Our approach is different because it requires less annotated data and maps to reward functions instead of trajectories, enabling the robot to interpret human goals.” I start by writing a long list of these sentences, and collect them in the related work section of the paper. It’s okay if they are not organized. Every time someone tells me about a paper I should put in related work, I add one of these sentence pairs (or at least a note to go look up that paper). Your goal is to not miss any areas of related work that a reviewer might be familiar with. As you build up longer lists of citations and sentences in this format, you can start to group them into areas. This might already happen naturally as you follow chains of related citations. At some point, you can explicitly start grouping the sentences together into paragraphs. It will still be choppy, but at least related papers will be together. Once you have paragraphs, you can start to write summary sentences. “There are many papers that learn to map between human language and robot trajectories~\citep{tellex11, matuszekxx, …}. Our approach is different because it requires less annotated data than these methods and maps to reward functions instead.” At this point you have a viable related work section.
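For concreteness, here is a minimal sketch of the two natbib citation commands; the key tellex11 and the rendered text are placeholders:

    \usepackage{natbib}   % in the preamble

    % Noun-phrase citation: renders as "Tellex et al. (2011) showed ..."
    \citet{tellex11} showed a framework that maps language to robot trajectories.

    % Parenthetical citation: renders as "... commands (Tellex et al., 2011)."
    Robots can follow natural language commands~\citep{tellex11}.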
Depending on your available space, you may find yourself needing more summary and analysis sections that group papers together. For example, you may not have space to give a one sentence overview of each paper. Then you can cut the detailed versions, and just keep the high-level summary. However, I have found that the best way to create the summary version is to start with the detailed version and then generalize. If you do find yourself cutting the detailed versions, make sure to comment it out, or copy it into the draft journal paper submission for later. Sometimes, there will be a very closely related paper that needs a whole paragraph on its own. For example, in our OO-POMCP paper, we built closely on the POMCP algorithm, and needed to give a detailed overview of the whole algorithm. Or there might be a particularly close piece of related work with a subtle distinction, and you need to give more detail to express this distinction. Feel free to write long if this is the case. Sometimes you can figure out a way to cut later, and sometimes the detail is necessary.
Related work is crucial for reviewers, who are often already familiar with the field, and don’t want to have to read your whole paper in order to understand your contribution. Writing related work often helps tease out the novel aspects of the paper that should be emphasized in the introduction and abstract. Don’t save it until the end.
Think of the paper as a tree of problems and solutions. It is important not to give any technical material without first explaining the motivation for that material. Technical/mathematical material is a solution to some problem, and it is extremely difficult to understand without first being clear about the problem it is solving. First is the overarching problem the paper is solving, which you give in the abstract and introduction. Then in the technical approach you break the problem down into sub-problems. Each of these sub-problems has a solution, which itself can be a set of sub-sub-problems paired with technical solutions. At any point, the reader may decide to stop descending into the leaf nodes and go on to the next sub-problem in the tree, and the paper should be designed for this kind of reading. Only the very rare reader will descend into the full technical detail of the paper – typically someone does this only when implementing the paper. Even when implementing and going all the way to the leaf nodes, it is crucial to explain the problem first, then the solution. This approach situates the technical material in the larger arc of the paper, and helps people attach it to a larger structure in their brain about how the paper is put together.
I like sentences like “In order to make the robot do X, we need Y and Z.” Then explain that bit. Then “In order to do Y we need A and B.” The whole paper should fit into a tree of these sentences, where the root of the tree is the overall motivation/main technical contribution for the paper.
A common failure mode is to put “preliminary” or “background” material into an introductory section of the paper that reads like a laundry list of random techniques. When a background section makes sense, it should be situated as part of the tree of problems and solutions for the overall paper, with the caveat that this particular material is not novel to this paper. It should still be connected to the rest of the paper as a tree of problems and solutions. It should not be a laundry list of technical material that will come up later, because people won’t know what it’s for at this point in the paper, and won’t remember it.
Results
For the results section, I like to start with a sentence describing the aim of the evaluation. This sentence should tie things back to the motivation set out in the introduction and the abstract. The results section should deliver on the promises made in the introduction. What did the robot actually do, that it couldn’t do before? How do we know that it did it? When reading a paper, I will often skip to the results section, especially if I am confused about the technical approach, to find out what the robot actually did. This paper has a good example of this approach. For each subsection or set of experiments, open it with the sub-hypothesis that that part of the evaluation is showing. A common pattern is to have some results in simulation, and a demonstration on the real robot; explain how each piece of the evaluation demonstrates the robot’s new capabilities. Make sure that the metrics you are using are as easy to understand as possible. For example, you could report “reward,” but that requires the reader to understand and agree with your reward function; it’s often better to report “accuracy”: whether the robot did the right thing or not over many trials.
Avoid writing a description of one experiment, then a description of another experiment, then the results of the first experiment, and then the results of the second experiment. After explaining what the first experiment is, give its results right away, while it is still in the reader’s head. There is no reason to wait to give those results, when the reader will forget the details of the experiment before they get to them. Moreover, hopefully your separate experiments each tell a different part of the story about how the system works.
Conclusion
The first paragraph of the conclusion should summarize the contributions of the paper. This is different from the abstract in that you want to explain what you did in the paper: what results have been proved or what algorithms have been created, and what demonstrations have been done to show that it worked. The later paragraphs should cover future work. This part of the paper can recognize limitations of your paper and sketch out open problems that remain to be solved. It is okay if you do not plan to solve them; you are also laying out boundaries for what this paper does not solve, which helps people understand what it does solve. Moreover you are also describing possible follow-on work that may inspire someone to expand on this paper.
Tenure!
Stefanie Tellex
I am proud to announce that I have been promoted to Associate Professor with Tenure. This milestone would not have been possible without the help and support from my wonderful students and collaborators, as well as my friends and family. This acknowledgement is an accolade to them as much as it is to me. Looking forward to many more years of language and robots at Brown!
Read the full CS blog post!
Abstract Product MDPs
Publication · Stefanie Tellex
Natural language instructions issued to robots convey intent with multiple layers of spatial abstraction. These layers include high-level goals (“fly to the end of the street”), low-level specifications (“go ten meters south then ten meters east”), or a hybrid (“fly to the lamppost on Hex Street”). In addition, language can express constraints on reaching a goal. Current robot autonomy systems struggle to understand instructions above the lowest layer of spatial abstraction.
Our recent paper published at RSS 2019, titled Planning with State Abstractions for Non-Markovian Task Specifications, approaches this problem with the Abstract Product Markov Decision Process (AP-MDP), a novel framework which enables the robot to understand higher-level spatial abstractions in natural language. This framework leverages a deep-learned model to translate natural language into Linear Temporal Logic (LTL), a symbolic form the robot can understand. From LTL, we create motion plans for the robot that align with goals and constraints spanning different layers of abstraction.
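To make the LTL target representation concrete, here is an illustrative example of my own (not one drawn from the paper): an instruction like “fly to the lamppost, but stay off the road” might translate to a formula such as

    F lamppost ∧ G ¬road

where F (“finally”) requires the robot to eventually reach a state where lamppost holds, and G (“globally”) requires ¬road to hold in every state along the way, so the plan never crosses the road.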
While AP-MDPs were initially tested with a small indoor robot, we realized this framework generalizes to more complex robots and environments. We connected AP-MDPs to a Skydio R1 drone and gave natural language instructions to fly the drone around Brown’s campus! The success of this flight has led to exciting new verticals of research for the lab.
Here’s the video:
Object Oriented POMDPs
Publication · Stefanie Tellex
The human mind does not see a visual scene as pixels; rather, it is able to pick out objects of interest and selectively reason about them. Although object-based reasoning represents a core feature of human reasoning, only recently has it become possible to study in robotics, due to advances in computer vision for segmentation and classification of objects from camera images. Our recent paper published at ICRA 2019, titled Multi-Object Search in Object-Oriented POMDPs, takes a stab at this problem by solving a novel multi-object search (MOS) task: without knowing in advance where the objects are located, a robot must find them in an indoor, roomed environment. We formulate the MOS task within a new framework called an object-oriented partially observable Markov decision process (OO-POMDP). An OO-POMDP represents the state and observation spaces in terms of classes and objects. The structure afforded by OO-POMDPs supports reasoning about each object independently while also providing a means for grounding language commands from a human at task onset. A human, for example, may issue an initial command such as “Find the mugs in the kitchen and books in the library,” and the robot can associate the locations with each object class so as to improve its search. We show that OO-POMCP with grounded language commands is sufficient for solving challenging MOS tasks both in simulation and on a physical mobile robot, which has applications for rescue and home robots.
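To illustrate the object-oriented factoring, here is a minimal sketch of the idea only; the class and attribute names are made up, and this is not the paper’s implementation. An OO-POMDP state can be represented as a collection of per-object states indexed by class:

    from dataclasses import dataclass, field

    # Minimal sketch of an object-oriented POMDP state: the state factors
    # into per-object attribute dictionaries keyed by object class. All
    # names here are illustrative, not the paper's implementation.
    @dataclass
    class ObjectState:
        obj_class: str                                  # e.g. "mug" or "book"
        attributes: dict = field(default_factory=dict)  # e.g. {"pose": (x, y)}

    @dataclass
    class OOState:
        objects: dict                                   # object id -> ObjectState

        def objects_of_class(self, obj_class):
            """Language like 'the mugs in the kitchen' can restrict belief
            updates to the objects of one class."""
            return {i: o for i, o in self.objects.items()
                    if o.obj_class == obj_class}

    state = OOState(objects={
        0: ObjectState("mug", {"pose": (2, 3)}),
        1: ObjectState("book", {"pose": (7, 1)}),
    })
    print(state.objects_of_class("mug"))   # only the mug's state

Because the state factors per object, a belief can be maintained independently for each object, and a command like “find the mugs in the kitchen” narrows the prior for just the objects of the named class.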