AI: Here and Now

December 21, 2023, Feature, by Lindsay Collins



How the rise of artificial intelligence will impact the field of parks and recreation

One thing about artificial intelligence (AI) is clear: it’s here to stay. But does embracing this technology have us headed toward progress or pitfalls — or both?

While AI has existed since the 1950s, the recent surge of generative AI — a subset of the tech that can create stories, essays, images, audio, video and more by learning from existing content — and the ability to interact with it using natural language, as if it were a friend or colleague, has thrust AI into the limelight.

Joe Pitti, deputy director of health and community services for Town of Easton (Massachusetts) Community and Engagement Programming, says originally, he thought AI would only affect people in certain fields, such as technology or higher education. However, after trying ChatGPT — an AI chatbot that communicates through conversational dialogue to provide detailed responses to instruction prompts — he thought, “This is something that probably does not have boundaries in terms of the fields that it’s going to impact. How can we leverage this and find ways to create efficiencies?”

Current Perceptions and Use

While Pitti is excited by the prospects of AI, feelings across the park and recreation field run the gamut. Next Practice Partners, LLC (NPP) conducted a survey during October and November 2023 on the perceptions and impact of AI in parks and recreation to determine where the profession currently stands. Among the more than 1,200 responses received nationwide, “the sentiment we’ve gotten is that it’s interesting, and it’s scary, and it’s inevitable,” says Neelay Bhatt, NPP founder and CEO.

Pitti’s team reflects that sentiment: “Some people are really excited — they’re like, ‘This is definitely going to make the work that I do a lot easier.’” On the other hand, he says, “I have certainly gotten the stereotypical, ‘Is this going to automate my job? Is this going to replace me?’” To quell concerns, he says, “You rebrand it, reframe it, put it through a different lens and say, ‘No, I’m encouraging you to use this, and we will use you in a different way.’”

So far, Pitti’s team primarily uses ChatGPT, a widely recognized AI platform that receives approximately 1.6 billion monthly website visitors and set the record for the fastest-growing user base in history for a consumer application. Pitti’s department of three full-time staff members serves approximately 30,000 community members and has used the program “as essentially an infinitely patient intern” to assist with a range of ideation and writing-heavy tasks, he says. “A lot of what our office had been doing prior to my hiring was using paper registrations, a lot of print marketing. The programs were sort of passed on verbally from generation to generation — there were no rule sheets, no emergency action plans,” Pitti explains. “So, a lot of what we use [ChatGPT] for is developing rule sheets that formalize some of our program structures…developing welcome letters for families as an introduction for some of our programs…drafting program announcements, memos, blurbs for brochures, different verbiage for staff trainings, emergency action plans. We’ve used it for quite a number of things.”

Another platform at the forefront for parks and recreation is Placer.ai, which uses location analytics to provide data on the movement of people, such as where they travel to and from and how long they stay at a given location. According to Perry Vetter, parks and recreation director for Edina, Minnesota, the primary questions his department aims to answer using Placer.ai are “really understanding who our users are…[and] how [they are] interacting within our park system.” In the past, staff would rely on information like sales transaction data to determine who was using which resources and amenities. However, sales transaction data isn’t available at every facility, and even the facilities that do have access to that data capture no information from people who pay cash, explains Kersten McManamon, marketing manager at City of Edina. “There are gaps in all of it, so we’re able to use [Placer.ai] as another tool in our toolbox to piece the data together to give us a more accurate picture,” she says.

Many commonly used programs and platforms, such as Google, Canva, Grammarly and more, have released their own generative AI products. As for what park and recreation professionals most look forward to seeing, NPP’s survey shows Microsoft’s enterprise AI, Copilot, is the most anticipated. “So basically, imagine AI woven into your entire Office suite,” says Bhatt. “Now you can, from your Teams meeting, record the transcription, get a summary in Word, create charts and tables in Excel, and then use that data to create a PowerPoint slide deck for you, which can then be sent to your contacts using Copilot’s help to draft the email. It’s really integrating what you already do with the AI on the back-end, all within the Microsoft Office space.”

The Good, the Bad and the Unknown

Like any tool, AI has the potential to produce both positive and negative effects depending on its use. Unlike most tools, the landscape of AI is changing so rapidly that it can be difficult to anticipate issues, and many outcomes of the widespread use of AI remain to be seen — potentially leading to unpredictable and far-reaching consequences. However, many of the drawbacks of AI can be addressed, and its extraordinary advantages harnessed, when it’s employed thoughtfully and judiciously. Here are some current areas to consider when using AI:

Accuracy and Authenticity

If this article were written by AI, would you be able to tell? According to tooltester.com, on average, 53 percent of people can’t tell when content was generated by ChatGPT.

According to NPP’s survey, just over half of park and recreation professionals cited accuracy of information as a top concern keeping them from using AI. At the same time, about 75 percent of respondents said they hope to use AI to assist with research.

When it comes to the content Pitti and his team receive from ChatGPT, “it’s not perfect,” he says. “It requires a little bit of review, and [staff] know they’re accountable for that.”

Using generated content as a starting point can be a great timesaver when working with general information or topics you’re very familiar with. However, users and consumers of AI should be wary of misinformation — either shared inadvertently or worse, for malicious purposes, such as deepfakes. “We have to be mindful of not rushing to judgment the moment we see something because it is going to get harder and harder to know what is true versus what is fake,” says Bhatt.

One question regarding accuracy that arose for Vetter and his team is how long data remains representative. “What level of decision do you want to make based off of it?” he asks. “Oftentimes…we’re not as nimble as we want to be. If you’re getting a set of data that you want to use for a budget request, how long do you have to wait before that is submitted or approved? It might be your next fiscal cycle, and then at that point, are you looking back [and asking], ‘Did my data change? Did my users change?’ With Placer.ai, we can keep those data sets up to date by regularly refreshing the data.”

Privacy

AI is shaped by data collected from its users and others. While many companies take steps to protect privacy, it’s no secret that some are not as careful and that breaches can happen even when security measures are put in place.

When City of Edina began using Placer.ai, staff members’ primary concern was protecting patrons’ privacy, especially regarding location data gathered from cellphones. “I think one flag that kept going off for [staff] is, ‘That feels invasive,’” says McManamon. However, with this particular platform, data is provided in the aggregate to maintain anonymity. “There’s no identifying information about who’s driving down the street at any given time…. It’s completely anonymous,” she explains.

In addition to data privacy concerns, there’s also worry about intellectual property and copyright infringement, given that generative AI learns and draws from information originating from people. While using AI can be a quick way to develop the structure of an idea, a balance needs to be maintained with human review and revision to ensure the final result has not been plagiarized.

Job Displacement

With any technological development, concerns about job loss due to automation emerge. “I’d certainly push back on that,” says Pitti. “The American economy didn’t lose jobs when we went from shovels to bulldozers. We didn’t lose jobs when we went from typewriters to computers. We didn’t start hiring fewer rec professionals when we came out with new recreation software. I don’t think that [AI] is going to result in anything that’s contrary to what those industrial transformations have looked like. So, I hope that would maybe cool some of the fears and hesitations to use something like this, because, ultimately, we’re here to provide a better experience to our residents, and I think this could do that.”

While abrupt large-scale job loss may not be imminent, those who are familiar and comfortable with AI will be better prepared for the future working landscape. “It’s not that AI will take your job — it’s that someone using AI will take the job of somebody not using AI,” says Bhatt. To adapt, Bhatt emphasizes the significance of mastering what he calls “prompt engineering.” “[It’s] thinking how to give prompts to AI in a way that gets you to the outcomes you want, because the interfaces may change, but the ability to think will stay,” he explains.

Bias

While AI is sometimes romanticized to seem “more than human,” it is, in fact, created by humans, learning from humans and utilized by humans. At every stage of human intervention, AI is inheriting our biases. For example, the New York Times article, “Who Is Making Sure the A.I. Machines Aren’t Racist?” highlights how AI “is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.”

As users of AI, we can’t control who is creating it or the mass amounts of information it draws from — but we can shape who within our sphere is invited and encouraged to participate in its use. The output reflects the input, and for this reason, agencies should ensure their AI users represent various groups and identities. “Within any one industry alone, there’s a lot more groupthink, which can result in an echo chamber. So, to me, AI as a field is one especially where we need to learn from those who are not in the [park and recreation] field, because the implications matter to everybody,” says Bhatt. “Ultimately, it’s about people. I would much rather have a cross-industry, cross-functional intersectionality of groups, interests, ages, backgrounds that are created, that become…think tanks.”

Even after these steps are taken on the user end, it can’t be assumed that what AI is producing is bias-free. In fact, it’s likely to yield — and, in turn, perpetuate — the most common results rather than the most relevant or needed. “So, to counter the built-in bias, when our agencies look at this from hiring to the information they use, to the images for the marketing collateral they design, they have to be intentional about not taking what’s most easily available to them,” says Bhatt. “Because that will be the lowest hanging fruit and, often, the most stereotypical representation.”

Interpersonal Connections

In a field dedicated to fostering connections, combating isolation and loneliness, and enhancing well-being by celebrating the humanity of our communities, what do we risk by leaning further into technology?

For Pitti, the time AI saves him and his colleagues on tasks allows them to focus on building connections with the community they serve. “The intention of using technology like this…is to remove some minutia…to allow my staff to focus on the more human elements of their job,” he says. “When our…staff is using it to draft their memos or their blurbs for brochures, they can take the 30 or 45 minutes back from that and go and interact with our different afternoon fitness programs, or they can take a little bit of time…to talk to somebody on the phone — versus sending a rushed email — and make sure there’s a little extra personal touch there.”

Ultimately, a balance must be maintained. Bhatt warns that too much reliance on AI could lead to people who are not accustomed to face-to-face interaction. “How do we ensure [our staff] is still culturally sensitive, is still empathetic, is still inclusive in what they do?” he asks. “What I really want to emphasize is DEAI — putting the DE [diversity, equity] in AI, so we’re always talking innovation, but through the lens of inclusion.”

Policies and Norms

Creating policies and norms around AI can help to identify guardrails that mitigate some of the risks of using it. However, creating a “safe zone” is no easy task when the target is constantly moving.

One resource we can turn to for guidance is the Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy. The blueprint identifies five principles to “help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”

Even with guidance, it’s important to continually assess how AI is being used within your department and whether changes to your practices are needed. “Twelve months ago, there was no ChatGPT available to the public. Imagine in 12 months how much and how fast things have changed,” says Bhatt. “So, any policy you [have], you have to be willing to tweak it on an ongoing basis…because the rate and scale of change with AI is exponentially faster than anything we’ve seen before.”

Another reason for discussing policies and norms is for transparency and consistency. Many staff may already be using AI for their work unbeknownst to their leadership. “As opposed to having one or two people doing it on the side, institutionalize the process so that it becomes a culture where you see AI as complementary and not competitive,” says Bhatt.

Keeping Up and Looking Ahead

While some park and recreation agencies have begun to use AI for tasks like administrative functions and marketing materials, many look forward to the help it could provide with social media content creation, customer service, security and maintenance, and program development, NPP’s survey shows. “You have fewer people doing that work faster, and then spending more time reviewing and creatively iterating and building versus starting from scratch,” says Bhatt. “If [we look] five years out, [I see] long-term planning, design, architecture, capital planning — all those pieces are going to be impacted because AI can take in all the designs around the world, look at your prompts and create designs that you would never even think of.”

For park and recreation agencies looking to dip their toes into the world of AI, Pitti’s advice is to jump in and try it. “Type in something as simple as, ‘Build me a job description for the position that I have coming up this summer’ and understand what it can put out there. And then experiment with it. Take an hour of your day, plug some different things in — it will not be time wasted whatsoever,” he says.

“Change is a constant. We can either be forced to change, or we can be a force for change,” says Bhatt. “[We’re] going to have to spend time reskilling and training staff on prompt engineering, AI tools, machine learning, all the rest. And [we’re] going to have to be willing to be nimble and change constantly while being mindful of the inherent bias that exists in AI. And if we [do that], I think we will come out just fine.”

Lindsay Collins is Managing Editor of Parks & Recreation magazine.