Last week I wrote that if we want to see a major increase in bicycling in the U.S., we are going to need to get mathematical about it. (This morning provided some vindication for the idea in the form of an article about a team of mathematicians who are doing something similar with public health.)
The problem with applying math to bicycle transportation, though, is that our data sucks.
We aren’t unlocking the secret to bicycling success in the U.S., and that’s in part because we aren’t asking the right questions, which is in turn because we are restricted to the data available. Mostly this means census data, a woefully inaccurate and inadequate metric that captures only a single primary mode of transportation for one kind of trip: commuting to work.
Far more useful would be metrics such as bike trips, or miles biked, as an alternative to counting — and then focusing our advocacy efforts on — individuals who identify as “cyclists.” Even more useful would be to look at where exactly those trips are made — what kinds of routes are taken and what sort of destinations are served, as well as the demographics of the people making the trips.
That’s where the science of counting bikes comes in.
Bike counts already happen across the nation, but the data they produce isn’t ideal, according to Krista Nordback, a PhD student studying bike counting at the Center for Sustainable Infrastructure Systems in Denver, whom we met several years ago on tour.
Nordback has since been working on developing ways to make bike counting more accurate. We recently got back in touch, and she shared some of her findings:
“I’m learning that the way we typically count bikes in this country (by manual counts during peak hours once or twice a year) is highly inaccurate, misleading, and needs to change (if you’re interested, see this blog post on the topic). A recent report by the Swedish Transportation Institute (VTI) says we need hourly counts for at least 2 to 4 weeks at each location. That means that we need to focus volunteer efforts less on counting and more on maintaining, checking and moving automated counters. That will be a big shift of focus. Fortunately, such automated counters are being installed more around the country, but we have a long way to go.
A hopeful sign is that FHWA is preparing to begin allowing bicycle and ped data to be uploaded to their traffic data warehouse where all the motor-vehicle counts live. This is a huge step forward! Work is still being done to determine which format this will need to be in, but it’s a much better situation than the essentially volunteer National Bicycle and Pedestrian Documentation Project since it will be maintained at the federal level and included with the rest of the traffic data. It’s also the first step toward requiring states to submit such data in the future.
It’s an exciting time for bicycle counting in the US!”
She concluded (emphasis mine): “We need these counts to be accurate so that we can track cycling on our paths and roads, in terms of changes from year to year, facility use, and as a basis from which to study cyclist safety. Getting these numbers right provides the basis for much of the future analysis that will be done on cycling. If we don’t get our numbers right, future analysis based on erroneous numbers may lead to incorrect and dangerous conclusions.”
If you’d like to learn more, see below for the nerdy details of Nordback’s preferred bike counting strategy:
I created two methods for estimating annual average daily bicyclists (AADB) based on time and weather variables. The first is a factor method based on the Traffic Monitoring Guide methods for motor vehicles, in which daily and monthly adjustment factors are calibrated and applied to short-term counts. The second is a statistical model that combines the factor method’s strength in handling categorical variables like time of day, day of the week, month, and year with a statistical model’s ability to handle weather variables. Both divide locations into two groups: those with clear commute patterns and those without. The second model provides the most accurate estimates for the data tested.
The error associated with both models is least when estimates are based on one or more weeks of continuous count data (less than 30% average absolute percent difference for counts taken all year round). However, when based on only one hour of bicyclist counts, the error in predicting AADB can be prohibitively high, with average absolute percent differences above 60%. While inclement weather does not seem to be strongly related to high error, low error is associated with estimates made from short-term counts collected from July through October. For these months, even when only three peak hours of counts are known, there is less than 20% error as measured by average absolute percent difference between actual and estimated AADB. This work thus recommends that short-term counts be collected during the months of July through October and ideally consist of at least one week of continuous counts. If such data cannot be collected, this work reports the error associated with such estimates and provides a way to estimate confidence intervals.
So the take-home message is that counts from July through October are likely to result in better estimates (based on Boulder data at least – probably different in CA!). For practitioners: If you use the National Pedestrian and Bicycle Documentation Project recommendations, at least count at each location on all 3 days, not just one day. One or two peak-hour counts is simply not enough to reduce your error to reasonable levels. Ideally, skip the manual counting altogether and gather at least 1 week of counts at each location using a portable continuous counting device.
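To make the factor method and the error metric above concrete, here is a minimal sketch in Python. The adjustment factors, counts, and function names are purely illustrative assumptions, not Nordback’s calibrated values.

```python
# Hypothetical sketch of the factor method: expand a short-term count into an
# AADB estimate using day-of-week and month adjustment factors. All numbers
# below are invented for illustration, not calibrated from real counters.

# Each factor is the ratio of that day's (or month's) typical bicycle volume
# to the annual average; in practice these come from continuous counter data.
DAY_FACTOR = {"Tue": 1.10, "Sat": 0.85}
MONTH_FACTOR = {"Jul": 1.40, "Jan": 0.55}

def estimate_aadb(daily_count, day, month):
    """Estimate annual average daily bicyclists from one day's count."""
    return daily_count / (DAY_FACTOR[day] * MONTH_FACTOR[month])

def avg_abs_pct_diff(actual, estimated):
    """The error metric quoted above: mean of |estimate - actual| / actual, in %."""
    return sum(abs(e - a) / a for a, e in zip(actual, estimated)) / len(actual) * 100

# A Tuesday count of 308 riders in July scales down to an AADB of 200,
# since summer weekdays run well above the annual average:
print(round(estimate_aadb(308, "Tue", "Jul")))   # → 200

# Comparing estimates against ground truth at three hypothetical counters:
print(round(avg_abs_pct_diff([200, 150, 400], [240, 120, 380]), 1))  # → 15.0
```

The division (rather than multiplication) is the key move: a July weekday count overstates the annual average, so the factors shrink it back toward a year-round figure.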
In conclusion, it looks like we are on the verge of a revolution in the way we count bicycling — which opens up possibilities for creating smarter, more effective strategies for increasing the number of bicycle trips being made.
Update: Portland is apparently soon to get its first Copenhagen-style electronic bike counter. Cool.