Assignment 4 Report


Can-I-Park - Signage help app

Have you ever had trouble understanding a parking sign? Can-I-Park is an augmented reality tool to help you understand parking signs. It takes the hassle out of reading parking signs, explaining when you can park in a simple, easy-to-understand interface.

Description of the application

Parking signs are well known for being confusing and difficult to understand - so much so that the ABC wrote an article about it: Can you decipher these confusing parking signs?.

Some parking spaces have many different rules at different times of the day. Misunderstanding a parking sign can lead to fines of over $80 in Hobart (Hobart City Council 2018). There are 2,487 metered and unmetered time-restricted parking bays within the Hobart region alone (Hobart City Council 2021).

Figure 1: Confusing parking signs (image from ABC News)

Can-I-Park will unburden the user from the information overload that parking signs present, simplifying the information down to an easy-to-understand answer: whether you can park, and for how long.

The ‘static’ nature of parking signs can lead to confusion, with many different times and rules often displayed on a single sign or pole. An augmented reality solution allows signage to adapt to the current situation, displaying only the information relevant at that moment.

Augmented reality offers a ‘see-through’ interface, one where you can still see the real world behind the overlay. This suits the use case well, since users may be walking or parked in their car and still need to see their surroundings.

Other interface options are less suitable:

  • Virtual reality is unsuitable due to the user needing to drive (or have recently driven) whilst using the app
  • A non-AR app could work, but identifying a parking sign without the camera would be less intuitive
  • Printed material is static and unable to adapt to changes or user behavior/history

Description of interface solution

The solution will operate as a mobile application designed for mobile phones (with a future vision of being suitable for mixed reality glasses and similar devices).

Users should be able to hold up their phone (or look at a sign with mixed reality headwear) and the app will then show the user if they are able to park or not.

If they are able to park, the app will show how long they can park for. In an ideal implementation, it may then offer the ability to pay if the space is metered.

If they are unable to park, then the app will highlight why and suggest other parking locations within the area to try. In a perfect implementation, the app would show available parking spaces nearby by connecting with the EasyPark parking system.

Software requirements:

  • Programmed knowledge of parking regulations and sign variations (from sources such as Common parking signs - City of Hobart, Tasmania Australia)
  • Ability to highlight relevant parts of the parking sign
  • Text recognition/optical character recognition to read in data from the parking signs*
  • Previous data on the location of all parking meters; this will drive a map-like view showing where other parking spaces are*
  • Connection with EasyPark/CellOPark to show available parking spaces and the ability to pay*

*not included in the prototype

Hardware requirements:

  • An Android smartphone (iOS would be supported in the final release) with:
    • a camera 
    • GPS or geolocation abilities
    • Android 5.0+
  • Or, a mixed-reality headset or smart glasses display*

*not included in the prototype

Interaction design

Scanning a parking space

On opening the app, the user is presented with a camera view so they can immediately start scanning parking signs without any interaction required. The app needs to work fast, even at a distance, since people may be scanning a sign a short distance away from within a parked vehicle.

Once a sign has been detected by the camera, the app will begin reading it using optical character recognition (the initial prototype will use preset sign images). The app will then display a clear indication of whether you can park, as well as any additional information, for example whether a permit is required or vehicle signage for a loading zone.

If you are unable to park, a red symbol will appear next to the sign to indicate the parking space is not available for your intended duration. 

If you are able to park then the app will display a green circle next to the parking sign.
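The red/green decision described above can be sketched as a simple rule check. The following Python sketch is purely illustrative (the prototype itself is built in Unity with Vuforia); it assumes a sign has already been parsed into structured rules, and the `SignRule` and `can_park` names are hypothetical, not part of the prototype:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class SignRule:
    days: set          # e.g. {"Mon", "Tue"}
    start: time        # window start
    end: time          # window end
    max_minutes: int   # 0 means "No Parking" during this window

def can_park(rules, day, now, intended_minutes):
    """Return (allowed, limit_minutes) for an intended stay at the given day/time."""
    for rule in rules:
        if day in rule.days and rule.start <= now < rule.end:
            if rule.max_minutes == 0:
                return False, 0                              # no-parking window: red symbol
            return intended_minutes <= rule.max_minutes, rule.max_minutes
    return True, None                                        # no restriction applies: green circle

# Example sign: 2P (120 min) Mon-Fri 8:30-18:00, No Parking Sat 8:00-12:00
rules = [
    SignRule({"Mon", "Tue", "Wed", "Thu", "Fri"}, time(8, 30), time(18, 0), 120),
    SignRule({"Sat"}, time(8, 0), time(12, 0), 0),
]
print(can_park(rules, "Mon", time(10, 0), 60))   # → (True, 120): under the 2P limit
print(can_park(rules, "Sat", time(9, 0), 30))    # → (False, 0): no-parking window
```

The first matching rule wins, which mirrors how a driver reads a sign: find the panel covering the current day and time, then compare its limit with the intended stay.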

Final versions of the product may also include audible cues and haptic feedback to help visually impaired users comprehend the sign. This would also be useful for a couple, where the passenger scans signs for the driver from within the vehicle.

Reading the screen in a moving vehicle may be difficult (especially if you have any visual/cognitive impairments) so sound alerts and vibration cues would help to provide immediate feedback to the user.

Setting intended parking time

The user is able to select how long they wish to park for via a slider at the top of the app. This defaults to 1 hour on the prototype but in the future we’d like Can-I-Park to calculate a default time depending on factors like:

  • the location of the user
  • previous parking behavior
  • aggregated data collection from other users
  • connection with calendar data
  • research, and more

A user near a hairdressing salon may see a longer default time than someone parking outside a butcher, because the app would anticipate that a potential hairdressing appointment requires a longer stay.

For example, a Hobart street parking study undertaken in 2000 found that average parking durations varied from 5 to 55 minutes depending on the location of the parking space, as well as other factors like neighboring businesses (Douglas 2000).

Map view of other nearby parking spaces

The user is able to tap a button to view a map of other nearby parking spaces. Once the button has been pressed, a map will overlay the other interface elements and show other available parking spaces within a few hundred meters of the user.

In the final release this would likely require a partnership with parking vendors/platforms like EasyPark and CellOPark in Australia.

Viewing parking rules, parking permits and other information

The user is able to tap a ‘learn more’ button at the top of the app to open an information screen. This screen would explain how parking works in the region, which may be particularly helpful for tourists and people who aren’t familiar with an area’s parking rules.

There is potential to partner with local councils to show more information about parking in the area, such as quick links to view rates, apply for permits and more.

Initial technical development

Can-I-Park is built with the Unity Engine, using the Vuforia Engine library for the augmented reality functionality. Newer versions of Vuforia Engine no longer support character recognition, so the prototype is only trained to recognise a set of preset parking sign varieties. The final version of the app is intended to use a library capable of optical character recognition and basic machine learning to understand any parking sign.

The prototype has been built to run on an Android smartphone and has been built to recognise 5 different parking sign images.

To set the intended parking time the user can slide the slider at the top of the screen. This uses a Unity canvas and a slider element. The slider allows a range between 15 minutes and 8 hours to be chosen.

The ‘Day’ and ‘Time’ UI components are purely there for easy testing/debugging, allowing the user to change the current day and time to any value of their choosing. The final product would not show these and instead inherit the current device time.

In the later stages of the prototype we will implement icons (like a clock for the time and a map to show the nearby parking spaces) and correct string formatting (to show ‘3 hours 15 mins’) to make the interface easier to comprehend.
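The string formatting described above is straightforward to sketch. The prototype itself would implement this in C# inside Unity; the Python below is an illustrative equivalent for a slider value expressed in minutes:

```python
def format_duration(total_minutes):
    """Format a slider value in minutes as e.g. '3 hours 15 mins'."""
    hours, minutes = divmod(total_minutes, 60)
    parts = []
    if hours:
        parts.append(f"{hours} hour" + ("s" if hours != 1 else ""))
    if minutes or not hours:
        parts.append(f"{minutes} min" + ("s" if minutes != 1 else ""))
    return " ".join(parts)

print(format_duration(195))  # → 3 hours 15 mins
print(format_duration(60))   # → 1 hour
print(format_duration(15))   # → 15 mins
```

Handling singular/plural forms ("1 hour" vs "2 hours") and dropping zero components keeps the label short enough to sit next to the slider.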

3D models

Sign by Zsky [CC-BY] (https://creativecommons.org/licenses/by/3.0/) via Poly Pizza (https://poly.pizza/m/9UeVfOHm6k)

A low poly model of a sign. In the finished product, a modified version would be used to display parking sign locations on a map.


Low poly pin thumbtack that may be used to place markers on the map screen to show potential areas to check for parking, or to highlight where meters are located so the user can pay for their spot (if they choose not to pay with the EasyPark integration).

Thumbtack by Kevin Lim [CC-BY] (https://creativecommons.org/licenses/by/3.0/) via Poly Pizza (https://poly.pizza/m/dZpKQs9NgKE)

Conclusion

Can-I-Park provides an intuitive interface, allowing the user to easily set their desired parking time and then, through the power of augmented reality, decipher often-confusing parking signs into a simple answer: can I park here? Yes or no.

References

Douglas, James. 2000. “Parking Standards & Provisions Review.” https://stors.tas.gov.au/au-7-0099-00017$stream.

Hobart City Council. 2018. “Parking By-law.” City of Hobart. https://www.hobartcity.com.au/files/assets/public/trimfiles/by-law-review-2018/parking-by-law-no-5-of-2018-signed-and-sealed-24-july-2018.pdf.

Hobart City Council. 2021. “CITY PARKING FACTSHEET: Parking in Hobart.” City of Hobart. https://www.hobartcity.com.au/files/assets/public/city-services/parking/factsheet-parking-in-hobart-july-2021.pdf.
