Archive for the ‘FLL’ tag
Teacher Appreciation Week is May 6th – 10th and we are celebrating! We LOVE all teachers and appreciate everything they do for their students! Here at the Robotics Academy, we have a special place in our hearts for robotics teachers, mentors and coaches, so this year we want to make sure they get the attention they deserve.
Do you know an amazing robotics teacher, mentor, or coach? Let us know who they are and why they are AWESOME! Send us your best story, pictures, and/or video about this person to firstname.lastname@example.org. We will share several of these stories on the Robotics Academy blog during Teacher Appreciation Week. And the Top Three Stories, voted by us, will each WIN one Classroom Annual License for Robot Virtual Worlds for their teacher/mentor/coach!
Stories must be submitted by Wednesday, May 8th at 5pm Eastern Time. We will announce the winners on Friday, May 10, 2013.
Please include contact information (name and email/school phone number) for the teacher, mentor, or coach that you’re writing about so we can make sure to get their permission to publish their name on our site. You can send any questions to email@example.com.
Part 4 of 4: Success in Different Forms
This is the final article in our 4-part series investigating math in LEGO Robotics Competitions. Part 1 introduced the past research and the context of the present investigation — a local LEGO robotics competition where investigators conducted interviews about team strategies. Part 2 laid out the range of strategies that were observed, and teased out an interesting result that teams using math-based strategies seemed to have widely varying success in the competition, with math-users both leading the pack and trailing at the rear. Part 3 looked in depth at the winning team and found that purposeful use of mathematics was central in both their programming and overall planning strategies.
But what about the teams that used math, but still scored low? Would it have been better for them to choose a non-math-based strategy?
Focus Team Surveys
As mentioned in the previous article in the series, the research team met with four Focus Teams outside of the competition, hoping to gain greater insight into their solution strategies and what they got out of participating in the competition. Each of the four Focus Teams completed two surveys before the competition, and the same two surveys again after the competition. The first survey consisted of 12 test-like questions that asked the students to solve problems involving robot motion (e.g., how many motor rotations are required to make this robot move this distance?). The second survey measured students' attitudes toward robotics and mathematics, including questions about their level of interest in robots and math and their view of how valuable math is for doing robotics.
Who were the Focus Teams?
Two of the Focus Teams consisted of students from elementary grades, and are codenamed Team E1 and Team E2. The other two consisted of middle school age students, and are identified as Team M1 and Team M2. Table 1 shows the teams, their grade levels, the number of students on their team, the strategy the team used for their first move, and their rank and best score from the competition.
Perhaps not surprisingly, the middle school age teams outperformed the elementary school age teams in the competition, as evidenced by their much higher ranks and final scores. This is not a universal effect, however, as a number of elementary school age teams did very well in the competition (ranked #5, #7, #9, & #12 out of 22 teams). Unfortunately, none of those teams were Focus Teams, so they did not take the surveys.
Among the Focus Teams, two (Teams E2 & M2) used the math-based Calculate-Test-Adjust strategy for their first move, and the other two teams (Teams E1 & M1) used a non-math-based strategy. This provides a nice contrast to explore the effect of using math in a team's solution. The conclusion from Part 2 still stands out – using a math-based strategy leads to high competition scores in some cases (Team M2), but not in others (Team E2).
But there may be more to tell about the teams than just their competition scores. What about the surveys?
The Learning Benefits of Using a Math-Based Strategy
Figure 1 shows the results from the robot math knowledge survey administered to the Focus Teams. The middle school age teams (Teams M1 & M2) have higher scores overall than the elementary school age teams (Teams E1 & E2), which is not surprising. The older students have more experience with mathematics in general, and it shows when they solve formal problems that make use of math.
However, a more interesting pattern can be found by looking not just at the scores, but at the gains. The two teams that used the math-based Calculate-Test-Adjust strategy (Teams E2 & M2) both improved on their survey scores from the beginning of the competition to after, but the teams who used a non-math-based strategy (Teams E1 & M1) did not. This suggests that regardless of a team's initial level, using math in an explicit way in the competition solution improves student use of math when solving more general problems relating to robot movements.
If increasing students' problem solving abilities using math is a goal of the robotics team, then just attempting to use a math-based strategy may have real advantages, regardless of how it impacts the team's overall success in the competition.
The Attitude Benefits of Using a Math-Based Strategy
The second survey measured students’ attitudes toward math and robotics in general. Figure 2 below shows the results from each Focus Team for this survey. Only 1 of the 4 Focus Teams had more positive views in each part of the attitude survey after the competition compared to before, and that was Team E2. Team E2 was the elementary school age team that used the math-based Calculate-Test-Adjust strategy in their solution. For this team, the experience preparing and competing in the competition did have a positive impact on their interests in robotics and mathematics, as well as their views about the value of mathematics in robotics.
Remarkably, this positive change in attitudes was attained in spite of the fact that Team E2 did not score highly in the actual competition (ranked #17 out of 22 teams). This result echoes the statements of a number of other coaches who, in the day-of-competition team interviews, stressed that they were participating to provide their students with a positive experience in robotics, not to win the competition. Perhaps it worked. It appears, too, that by using mathematics in the robotics competition, attitudes toward math itself get caught in the updraft, and benefit as well.
There are a number of positive outcomes that result from participating in a robotics competition. Better problem solving and more positive attitudes toward robotics and mathematics are two outcomes that appear to be attainable. These are in addition to, and possibly even preferable to, performing well in the competition itself!
Conclusion #3 – Even when a team's use of math doesn't lead to success on the challenge, just attempting to use math can have other benefits in terms of improving students' understanding and developing more positive attitudes about math and robots.
This series has been about finding out what makes successful teams successful. Being older, more experienced, and better at math sure seem like advantages for teams in a robot competition. But those are hardly the point, or even the whole story.
The use of mathematics in solution strategies, however, is very much to the point. Every coach has this option, and it appears to pay off in both tangible and intangible ways. A team with a high degree of fluency in mathematics can apply math in creative ways to springboard themselves to the top of the charts.
A team that is less comfortable with mathematics but commits to using math anyway sets itself up for a different kind of success – real, measurable gains in student problem-solving capability and attitudes toward robotics and math. If, in trying to create more systematic solutions, students' failed attempts actually help them to understand more about the way the robots work, they will be able to apply those improved understandings to future problems. If, in the challenge of attempting to use math, a student comes to understand the role or context of mathematics better, it makes both robotics and mathematics more interesting, and helps the student to see math as having real, usable value in robotics and the world.
And that certainly sounds like a success any FLL coach would be proud to report, trophy or no.
Thank you for reading our series on the benefits of math in LEGO robotics competitions. We hope that you found some useful information to think about when working with your team this year for the upcoming FLL competition. Please leave comments to let us know your thoughts on our articles and on the use of math in educational robotics more generally. Also, consider volunteering to help us in our next investigation. There are still many open questions about what helps a team be successful, and we hope to continue to investigate those questions and share what we find with the FLL community (send an email to Eli Silk if you are interested).
In actuality, five teams agreed to be Focus Teams, but one team's data was incomplete and therefore not fully counted. The patterns observed continue to hold even if the incomplete data is included.
The number of students on the Focus Teams reported in the table is the actual number of students who participated in team activities. This is sometimes different from the number of students who completed the surveys. The surveys were only given on particular days, on which some students may have been unavailable. The number of students who completed the surveys for each Focus Team is shown on the x-axis of the results figures for the surveys.
Part 2 of 4: The Range of Strategies
Part 1 of this series set the stage for an investigation into student mathematics usage at a local LEGO robotics competition. In Part 2, we'll take a look at the types of strategies that teams came up with for solving the challenge, and how those different approaches fared in the competition.
Interviews with the Teams
22 teams from the greater Pittsburgh area participated in the 2010 May Madness robotics competition. Investigators from the University of Pittsburgh's Learning Research and Development Center (LRDC) and Carnegie Mellon University's Robotics Academy (RA) were able to interview 16 of them about their team sizes, grade levels, and experience levels for both students and mentors. They also asked teams to describe their solutions to the challenge and how they came up with those solutions.
The Different Strategies
As expected, different teams came up with very different solutions. In fact, they were so different that apples-to-apples comparisons became nearly impossible at the whole-strategy level. Fortunately, every strategy did include one common component: moving the robot to the center of the board to begin scoring points.
That only 3 teams used a (non-rotation) Sensor-Based strategy is likely a direct consequence of the nature of the Botball Hybrid II challenge. In particular, the toilet paper tubes were not steady enough for a robot’s touch sensor to contact them without tipping the tubes over. As a result, teams seeking to score using the tubes had to choose non-contact means of controlling their robot's movement. The 3 teams that did use a Sensor-Based strategy on their first move were all going for the nests, which are much heavier than the toilet paper tubes. However, for various reasons, even these teams abandoned use of their sensors in their moves later in the challenge. In addition, the board surface featured few marked lines, making line-following and line-tracking less attractive.
A Math-Based Strategy for Calculating Motor Rotations
The remaining 13 teams programmed their initial move using the rotation sensor, effectively moving a set distance forward. However, those 13 teams used decidedly different methods to choose their motor rotation values, especially the initial value. Some teams guessed; others used the view mode; but four teams chose to start with a math-based prediction.
These groups all ended up using variants of a three-phase strategy called Calculate-Test-Adjust:
- Measure how far the robot has to move and use mathematical means to calculate a (theoretically correct) rotation value for the movement
- Run the robot with the predicted value
- Compensate for any observed overshoot or shortfall by making small "tweaks" to the rotation value
Students used several different mathematical relationships to arrive at their predictions. For example, one group measured how far the robot moved forward with each motor rotation, then calculated how many of those 1-motor-rotation distances the robot needed to move the total distance to the target. The students then entered this value into their program, tested it, and fine-tuned the value to get the robot to exactly the right spot.
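The calculation step of this approach can be sketched in a few lines. This is an illustrative sketch only — the 17.6 cm distance-per-rotation and the 88 cm target distance below are assumed example numbers, not measurements from any of the teams:

```python
# Sketch of the math-based prediction step in Calculate-Test-Adjust.
# The robot's distance-per-rotation is measured empirically by the team
# (e.g., roll the robot one full motor rotation and measure how far it went).

def predicted_rotations(target_distance_cm, distance_per_rotation_cm):
    """Return the theoretically correct motor rotation value for a straight move."""
    return target_distance_cm / distance_per_rotation_cm

# Example: a robot that travels 17.6 cm per motor rotation (assumed value),
# aiming at a target 88 cm away (assumed value):
rotations = predicted_rotations(88.0, 17.6)
print(rotations)  # starting value for the program; then test and adjust
```

The printed value is only the starting point — as the teams found, the predicted number then gets refined through the Test and Adjust phases.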
One notable quality of this strategy is that it is not purely mathematical – all 4 teams that used Calculate-Test-Adjust for their initial motor rotations value ended up having to refine their value with guessing or with the view mode afterward. A math-based calculation does not appear to be sufficient on its own for this type of problem.
The Relative Success of the Different Strategies
So how well did these math-using teams fare compared to their sensor-using, guess-and-testing, and view mode-ing peers? The 22 teams were ranked based on their best point score after 3 rounds of the competition. Figure 1 below shows the average rank of the teams who used each strategy. Bigger bars indicate a higher average ranking for the teams using that strategy — meaning teams who used that strategy had better scores in the competition.
Looking at the data in this way, the View-Mode strategy was the most effective and the Sensor-Based strategy was the least effective. The Guess-Test-Adjust and the Calculate-Test-Adjust strategies seem to be in the middle and similar to each other. Given that this particular challenge was somewhat biased against the use of sensors, it probably makes sense that teams who used the Sensor-Based strategy did not fare well. But what of the others?
The View-Mode strategy did seem to do particularly well. The investigators theorize that this strategy leads to success for two reasons. First, teams that use this strategy can program their movements quickly. Figuring out the correct motor rotations value is straightforward and fast, so that frees the team up to spend their limited time improving other parts of their solution (e.g., making their robot base solid and their attachments functional). Second, the View-Mode strategy is very reliable, so once teams get a motor rotation value by using this strategy, they then have a lot of confidence that that value is the right one and will work well. In essence, the View-Mode strategy is easy to implement quickly and gives very reliable results, which explains why teams who chose that strategy tended to do well in the competition.
The Success of the Math-Based Strategy
Compared to rolling the robot on the ground and reading a number, both Guess-Test-Adjust and Calculate-Test-Adjust are slow to implement and potentially less reliable as well. And in the results, teams who used these strategies did okay in the competition, but not as well as teams who used the View-Mode strategy… case closed. Right?
Averages, it turns out, don't tell the whole story. Calculating the standard deviation of the ranks gives us a sense of how tightly clustered these different success levels are for each strategy. If everything were cut-and-dried, we'd see all the View Mode teams clustered at the top, followed by the test-and-adjust teams, and sensor-based teams at the bottom.
Instead, when we add the standard deviation as error bars on the previous bar plot of the average ranks (see Figure 2), some things fall into place, and others fly loose. The View-Mode strategy was the least variable – teams using it were tightly clumped in the rankings – further supporting the idea that that strategy is straightforward and reliable. But the Calculate-Test-Adjust strategy has a huge variability (the error bars span almost the entire range of possible ranks)! Something important remains untold.
In fact, a closer look at the 4 teams that used the Calculate-Test-Adjust strategy shows that 2 of them were the top ranked teams in the entire competition (ranked #1 and #2 out of 22 teams). This suggests that using a math-based calculation strategy can be very powerful. At the same time, the other two Calculate-Test-Adjust teams were #17 and #21 out of 22 in the rankings – the complete opposite end of the scoring spectrum.
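A quick computation shows how the same average can hide very different spreads. Only the four Calculate-Test-Adjust ranks (#1, #2, #17, #21) come from this investigation; the View-Mode ranks below are hypothetical placeholders chosen merely to illustrate a tight cluster:

```python
from statistics import mean, pstdev

ranks = {
    "Calculate-Test-Adjust": [1, 2, 17, 21],  # ranks reported in this article
    "View-Mode":             [3, 4, 5, 6],    # hypothetical tight cluster
}

for strategy, rs in ranks.items():
    # A similar mean with a much larger standard deviation means the
    # strategy's teams were spread across the rankings, not clustered.
    print(strategy, "mean rank:", mean(rs), "std dev:", round(pstdev(rs), 1))
```

The Calculate-Test-Adjust ranks average to the middle of the pack (10.25) but with a standard deviation near 9 — nearly the whole range of possible ranks — which is exactly the "huge variability" visible in the error bars.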
This dramatic separation in performance suggests something powerful (see Figure 3). Perhaps it is not enough to simply use a strategy; the result may hinge dramatically upon the strategy being used right. Perhaps when the Calculate-Test-Adjust strategy is implemented well, it is just as quick and just as reliable as the View-Mode strategy, if not even better. Done without a full understanding, however, the calculations could turn into distractors.
The research team theorizes that teams who are fluent with mathematics can use math-based calculations to their advantage by determining the correct motor rotation values for different moves relatively quickly. As with the View-Mode strategy, this time savings frees resources for use on building tasks and fine-tuning overall strategy. Teams that are less fluent in mathematics, however, would take longer to perform the math-based calculations, and make more errors, thus taking time away from working on other important parts of the task.
Conclusion #1 – Not many, but some teams do use math. And of the teams that do use math, there is widely varying success, from some of the most successful to some of the least successful.
Overall, teams found a range of ways to approach the challenge. Different challenges may favor different types of strategies, but the May Madness event saw a variety of approaches employed. Some strategies did seem to lead to better success in the competition. In particular, the View-Mode strategy seemed to be very successful for teams, presumably because it is quick and reliable. Not many teams chose to use the math-based Calculate-Test-Adjust strategy, but those who did ended up with both the highest scores in the competition, and some of the lowest scores. This suggests that for the math-based strategy more than any other, it matters not just that a team used that strategy, but how they used it.
Fortunately, in addition to the day-of-competition interviews, the research team also met with a few of the teams outside of the competition to understand their solution strategies in more depth. The winning team in the competition was one of these. Did using math really help this team be successful? And if it did, then how? Continue on to Part 3 to find out about the winning team's strategy and their use of math.
There was one other strategy that teams used to determine how many motor rotations to use in their program. We call this strategy the “Overshooting” strategy because it works in situations where it isn't critical that the robot moves a particular amount as long as the robot moves far enough. For example, when approaching the nests it was okay if the robot went too far because it would just push the nest forward a bit, but the nest would still be in a position where it was easy to grab. This strategy didn't work with the toilet paper tubes, because if the robot went too far and bumped into them, they would fall down and would then be much harder to grab. In cases when overshooting was acceptable, teams were able to choose a motor rotations value that was safely big enough without having to worry if it was exactly right. No team used this strategy on their initial robot movement and teams were more likely to use it when programming the manipulators, so we didn't include it in our primary list of strategies.
One could argue that the rotation sensor is a sensor like all the others. In particular, the programming logic is the same, so a strategy that used the rotation sensor could also be labeled Sensor-Based. But here we think the distinction between the rotation sensor and the other sensors (e.g., touch, ultrasonic, and light sensors) is meaningful as the rotation sensor is the only one that will make the robot move with little regard to what is out in the world. Strategies using the other sensors will move varying distances depending on the way the objects in the world are configured, but the rotation sensor strategy (within some error) will always move a consistent amount.
Part 3 of 4: A Winning Strategy
This is Part 3 of a 4-part series investigating how math may help in LEGO Robotics Competitions. Part 1 introduced the past research and the context of the present investigation — a local LEGO robotics competition where investigators conducted interviews about team strategies. Part 2 laid out the range of strategies that were observed, and teased out an interesting result that teams using math-based strategies seemed to have widely varying success in the competition, with math-users both leading the pack and trailing near the rear.
So what was it that the most successful teams did that led to their success? In part 3, we take a look at the winning team's strategy and see how they used math to great effect.
A Focus Team
In addition to short, standardized interviews with teams on the day of the competition, the research team also sat down for more in-depth interviews with four robotics teams outside of the competition, hoping to gain greater insight into their solution strategies. Two of these Focus Teams were composed of middle school aged students and two of elementary school aged students. One of the Focus Teams – codenamed M2 – happened to be the team that won the competition.
Team M2 used a math-based Calculate-Test-Adjust strategy for their first move. They were one of only four teams (out of 16 interviewed) who used a math-based strategy. But Team M2's overall solution was fascinating and worth sharing as there is so much that can be learned from what they did.
Team M2 was a school-based team consisting of 10 students, all from a gifted program in a suburban school. There were one 8th grader, six 7th graders, and three 6th graders. Four of the students had been to a competition before, but the rest were rookies. Their coach, a gifted-program teacher from the school, had been a coach for five previous robot competitions, so she was very experienced. They reported spending about 17 total hours preparing for the competition, with about 10 of those hours in just the last two weeks. This was, in fact, on the low end of total preparation time compared to other teams that were interviewed. Team M2 met during normal school hours, when the teacher was able to pull the students from their regular classes, which may have constrained the amount of time they could meet.
Team M2's Robots and Game Strategy
Team M2 was large and had multiple robots, so it was able to split into two sub-teams. They divided the task into missions, with one sub-team working on the toilet paper tubes and the other sub-team working on the nests. They built one robot according to the Robotics Educator Model (REM) given in the LEGO® instructions, although they adapted it by substituting larger wheels. They also built a second robot entirely from scratch. The REM robot had two different attachments: one for collecting the toilet paper tubes and the other for loading and transporting the ping pong balls to the gutter and the empty tubes to the end zone scoring area. The second robot was used to retrieve the nests. They designed this second robot from scratch because they felt they needed a robot that was heavier than the REM robot design in order to effectively pull the nests back. Below are photos of Team M2's robots and attachments. These robots as a whole were not very complex, but each robot design and attachment was well-tuned to specific parts of the challenge.
Team M2's Winning Round
Team M2 ended up with a high score of 91 points in the competition. See below for a video of their winning round. It is clear from the video of Team M2's robots in action that all of their movements are quick and reliable. They retrieve all three toilet paper tubes very quickly and without any fumbling. As mentioned in Part 2, the research team suspects that this is because Team M2 was able to use the Calculate-Test-Adjust strategy to make efficient calculations that got them close to correct motor rotation values very quickly. The time savings allowed them to work on other aspects of the challenge, such as ensuring that both of their robot designs were robust and reliable. This, too, shows clearly in the video, as Team M2 uses their multiple robots and attachments to clear advantage. In general, Team M2 is a great example of an efficient and focused team that produced a high-quality solution.
Team M2's Other Math
Team M2 did use Calculate-Test-Adjust, a math-based strategy for movement, but perhaps the most exceptional aspect of Team M2's strategy was a completely separate use of mathematical thinking. One of the students on Team M2 did a systematic analysis of the points that the team could get based on observations of their practice rounds. She measured the time they took to complete each mission and the points that they could get, and then identified the best ordering to help maximize their total points. She determined that their team could get the toilet paper tubes (and all 9 ping pong balls contained within them) back to base then deposit the balls into the gutter and the tubes into the end zone in 52 seconds for a total of 57 points. Then they would still have time to pursue the nests for additional points. In their winning round (see the video above), they execute this strategy almost perfectly, although a later mission ends up knocking one of their toilet paper tubes from the end zone scoring area.
Although the team no longer had documentation of their analysis when interviewers met with them after the competition, the research team attempted to recreate it in Table 1 to illustrate how powerful such an analysis can be. When the points are broken down in this way, it is clear that the large majority of points are to be gained by going after the ping pong balls, half of which are in the toilet paper tubes, and putting them in the gutter. And this is exactly what Team M2 did, doing so very efficiently and reliably. Thus, Team M2's use of mathematics extended beyond programming into the planning process itself, and appears to have paid off very well.
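The kind of analysis Team M2's student did can be sketched as a simple points-per-second ordering within the 90-second round. Only the 57-points-in-52-seconds figure for the tubes-and-balls mission comes from the interviews; the nest mission's numbers below are hypothetical placeholders:

```python
# Sketch of a mission-ordering analysis like Team M2's.
missions = [
    {"name": "tubes + ping pong balls", "points": 57, "seconds": 52},  # reported
    {"name": "retrieve nests",          "points": 20, "seconds": 30},  # hypothetical
]

ROUND_SECONDS = 90  # length of one round in the Botball Hybrid II challenge

# Greedily schedule missions by points earned per second of robot time,
# skipping any mission that no longer fits in the remaining time.
plan, elapsed, total = [], 0, 0
for m in sorted(missions, key=lambda mi: mi["points"] / mi["seconds"], reverse=True):
    if elapsed + m["seconds"] <= ROUND_SECONDS:
        plan.append(m["name"])
        elapsed += m["seconds"]
        total += m["points"]

print(plan, elapsed, total)
```

Even with rough timing estimates, an analysis like this makes the priority obvious: the tubes-and-balls mission earns points at roughly twice the rate of the placeholder nest mission, so it goes first — exactly the ordering Team M2 executed.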
Conclusion #2 – The most successful teams do use math purposefully and efficiently, and their math use is a prominent factor separating their solutions from the solutions of the rest of the teams.
It seems reasonable to think that Team M2 was an exceptional team, with some previous competition experience among its team members, students who were generally considered smart and good at math, and an experienced mentor. Nevertheless, it also seems clear that a big part of Team M2's success was a direct result of their use of math in their solution strategy, and that their math use gave them real advantages at multiple levels.
A team that uses math effectively to quickly zero in on correct motor rotation values in their program can save valuable time. That time can then be used to make the rest of the robot more efficient and reliable, or even build a second specialized one to complement the first. In addition, using math as a larger strategy to do more systematic analysis of the points breakdown and the effectiveness of different mission solutions can have a major impact on a team's maximizing its performance at the competition.
In reality, of course, not every team will walk in the door with the background to apply math as effectively as Team M2. After all, the observations in Part 2 show that there were teams who tried to use a math-based strategy but ended up performing poorly, and that the View-Mode strategy, which doesn't include any math, was the most straightforward, reliable, and effective strategy on average.
Is math only a strategy that should be pursued by “elite” teams and students, then? Of course not! On the contrary, the final article in this series will provide evidence that the use of math in robotics competitions can produce winners in different ways… and more importantly, it can help produce the type of winning that will last long after the competition is over!
Part 1 of 4: Introduction and Background
Every September, thousands of FIRST® LEGO® League (FLL) coaches and mentors around the world crack their knuckles, dust off their parts bins, and prepare to dive into an intensive three-month odyssey of technical twists and twenty-first century tutelage as they guide their teams to success in the annual FLL competition. But what exactly is success in a world of gameboards and gracious professionalism? Is the highest scorer really the biggest winner? What do students actually gain through their participation? How does it happen? And so, how should the enlightened coach choose from the multitude of competition strategies that lie open as the new season dawns?
Perhaps we can learn something from the recent past.
Since 1999, the Robotics Academy (RA) has been helping teachers, mentors, coaches, and students have positive educational experiences with robotics. Among other things, the Academy develops curricula, offers teacher professional development, and hosts robotics competitions. Recently, the Robotics Academy — in cooperation with the University of Pittsburgh’s Learning Research and Development Center (LRDC) — has been investigating ways to deepen students' experiences with robotics by incorporating math in their activities.
This blog series describes what RA and LRDC researchers found when they interviewed teams at a local LEGO robotics competition, looking to answer a few key questions:
- Are there opportunities to use math in a typical robotics competition problem?
- Does using math have any impact on a team's score?
- Can using math deliver “success” in any other sense?
In short, the investigators found that the answer to all three questions was an overwhelming yes — there are opportunities to use math in a LEGO robotics competition setting, and when teams do use math it seems to be helpful in ways not limited to points and trophies. Ultimately, the research team arrived at 3 major conclusions:
- Not many, but some teams do use math. And of the teams that do use math, there is widely varying success, from some of the most successful to some of the least successful.
- The most successful teams do use math purposefully and efficiently, and their math use is a prominent factor separating their solutions from the solutions of the rest of the teams.
- Even when a team's use of math doesn't lead to success on the challenge, just attempting to use math can have other benefits in terms of improving students' understanding and developing more positive attitudes about math and robots.
Each remaining article in this series will examine one of these conclusions in detail and describe how we arrived at each one. But first, let’s set the scene.
An Initial Investigation
Since 2000, the Robotics Academy has hosted the FIRST LEGO League (FLL) Pittsburgh State Tournament. Last Fall, the Academy decided to see what it could learn by interviewing participating teams on the day of the competition. Robotics Academy researcher Ross Higashi interviewed a sample of the more than 70 teams that competed in the 2009 state competition and put together two Robotics Academy Blog posts that identified who FLL teams are and the connections they make to Science, Technology, Engineering, and Mathematics. Although it praised the posts overall, a comment on one of them challenged the researchers to go into more depth:
If “a few highly successful teams have shown great adherence to principles of good design”, can Carnegie-Mellon or FIRST make their stories, plans, approaches available more widely to the community? Otherwise, without good examples, it will remain hit-or-miss for the vast majority of the teams.
And so a followup plan was devised. Surely another round of interviews could find good examples from which the whole FLL community could benefit! The research team’s first opportunity to conduct interviews was at a local competition called May Madness, on Saturday, May 8, 2010 at the Sarah Heinz House in Pittsburgh's North Side neighborhood. Although not as large as the FLL regional championship, the May Madness event attracts the same types of teams and uses similar challenges.
The 2010 May Madness competition included a number of different events, including separate challenges for different age divisions, different robot platforms such as VEX and TETRIX, and even a non-robotic Alice storytelling competition. To provide the most FLL-relevant information, the interview team focused on the “Botball Hybrid II” LEGO MINDSTORMS NXT challenge, geared toward elementary and middle school age students.
Although not quite as complex as typical FLL challenges in terms of the number of missions or the variety of objects on the board, the Botball Hybrid II challenge includes a number of elements that require sophisticated solutions. Two teams occupy the board at the same time, a black team and a white team. Each team can have one robot on the board at a time, and the teams start at opposite ends of the board. The object is to get the most points possible in a 90-second round. Points are obtained by collecting ping pong balls and toilet paper tubes of the team's color and also common nests and foam balls. Knocking the ping pong balls loose gets some points, but the most points are obtained by bringing the objects back to a team's end zone. Even more points are obtained by lifting the objects into the gutters on the side of the table. See the gallery of images below for pictures of the game board, the items on the board and their specifications, a list of the rules of the challenge, and the points system.
The Findings and the Future
How did teams try to solve this challenge? Did math come into the picture at any point… and if so, did it help? Should coaches bother encouraging students to try using math in a challenge like this?
Each remaining article in this series will focus on answering one
of these questions. Part 2 of the series describes the range of strategies that teams employed, Part 3 details the winning strategy, and Part 4 discusses some alternative versions of success that were observed.
As you read through the research team’s findings and interpretations, please let us know what you think by leaving a comment on the blog! And if you are planning to attend this year's Pittsburgh State Tournament FLL Competition, the Academy would love to have your team be a part of the next round of investigation (send an email to Eli Silk if you are interested). We hope you find these articles helpful and wish you the best of luck in the upcoming competition season!
In Part 2, we’re going to take a look at what an FLL team gets out of the experience. Every student goes home with a medal for participation, and some earn trophies as well. But what do students really take home with them in terms of learning and experience?