How can community groups safely use AI?
Government and community groups need to get together to issue sensible guidance fast
Guidance is needed, but there is no point in guidance that is totally restrictive
With the rise of AI, community groups are keen to use it. However, in New Zealand, just like in many other places, there are privacy laws and health information requirements that must be met. In addition, there are issues surrounding national data sovereignty and Māori data sovereignty that need to be taken into account.
It does not make any sense to me for individual community groups to work on this on their own. The likely outcome is that they will err on the side of caution and create overly restrictive policies. It would be a shame if community groups were unable to use AI tools to serve their clients while the private sector races ahead with its implementation in a wide range of situations.
Government and community groups should get together to issue generic guidance
I quickly looked but could not see anything on the net about government and community groups working together on this issue in NZ. Perhaps something is happening; if so, great. If not, it would be wonderful if a group could come together, sort this out, and provide sensible, realistic guidance that does not just kick for touch and attempt to tightly restrict AI usage by community groups. It does not have to be a mammoth exercise; there is plenty of expertise in government around this issue, as government agencies are all grappling with it.
Overly tight guidance may just promote illicit and uncontrolled use
The issue with excessively tight AI usage guidance at present is that it may lead to illicit usage, as overstretched community group workers recognise the power of AI. Detecting such illicit usage in a busy organisation is difficult.
Doing a privacy assessment is technical work
The requirements for conducting a necessary Privacy Impact Assessment can be quite technical for a community group and somewhat off-putting. I have outlined ChatGPT’s version of the requirements below, along with an analysis of whether one of the systems I believe people should consider, Compliant ChatGPT, meets them.
IP leakage
People are naturally worried about whether using AI will lead to their IP being leaked; of course, this is also an important concern for Māori data sovereignty. You can get some protection here if you have a paid subscription to ChatGPT and turn off the setting that allows it to train on your usage of the model. If guidance were developed, it should look at this issue amongst others.
Confidential information
Another concern is confidential information, which poses a significant issue in healthcare, although positive steps are being taken. For instance, Compliant ChatGPT is a system that takes the input you provide, strips out all personal details, and sends only a de-identified prompt to the main ChatGPT; when the reply comes back, it puts the personal information back in. If a group were to provide consolidated generic guidance to NZ community groups, it could work out whether Compliant ChatGPT complies with NZ privacy laws and standards. Below is an example illustrating how Compliant ChatGPT removes private information (this case study and the names in it are entirely fictitious). The words in blue have been removed from what was sent to ChatGPT; names, addresses, and the like are never passed on. Compliant ChatGPT gets its name from its HIPAA compliance; HIPAA is a US set of standards for privacy in health information. Anyone can check Compliant ChatGPT out for free—just sign up.
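The de-identify then re-identify pattern described above can be sketched in a few lines of Python. This is not Compliant ChatGPT’s actual implementation — the names and the hand-written lookup table are invented for illustration, and a real system would detect personal details automatically rather than from a fixed list — but it shows the basic idea:

```python
# A minimal sketch of the redact-then-restore pattern.
# The lookup table is hypothetical; a production system would use
# automated entity detection, not a hand-maintained list.
PERSONAL_DETAILS = {
    "Aroha Ngata": "[NAME_1]",
    "12 Karaka Street": "[ADDRESS_1]",
}

def redact(prompt: str) -> str:
    """Swap each personal detail for an anonymous placeholder
    before the prompt leaves the organisation."""
    for detail, placeholder in PERSONAL_DETAILS.items():
        prompt = prompt.replace(detail, placeholder)
    return prompt

def restore(reply: str) -> str:
    """Put the original details back into the model's reply,
    locally, before anyone reads it."""
    for detail, placeholder in PERSONAL_DETAILS.items():
        reply = reply.replace(placeholder, detail)
    return reply

sent = redact("Write a follow-up letter to Aroha Ngata at 12 Karaka Street.")
# sent == "Write a follow-up letter to [NAME_1] at [ADDRESS_1]."
final = restore("Dear [NAME_1], thank you for visiting us at [ADDRESS_1].")
# final == "Dear Aroha Ngata, thank you for visiting us at 12 Karaka Street."
```

The key point is that the external model only ever sees the placeholders; the table mapping placeholders back to real details never leaves the organisation.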
Possible ways in which community organisations may be able to use AI
Here are some possible ways (use cases) in which community organisations might be able to use AI if they knew that the systems they were using met privacy and other requirements. If a group were looking at providing generic advice for NZ community organisations, it could go through this list and suggest which AI systems meet current technical and legal requirements for each use case.
Administration & Governance
Take, transcribe, and summarise meeting minutes
Draft board papers, policies, constitutions, AGM packs
Calendar / task automation and reminders
Internal & External Communications
Write or tailor emails, newsletters, press releases, and stakeholder updates
Generate social-media posts and schedules
Create direct-mail appeals or SMS campaigns with segment-specific wording
Presentation & Learning Materials
Produce slide decks, speaker notes, hand-outs, infographics
Convert dense reports into one-page briefs or poster summaries
Script short explainer videos or webinar content
Research & Insights
Rapid literature scans on community issues, policy changes, funding trends
Summarise academic papers or government consultations into plain language
Compile comparative tables (e.g., service models, regional statistics)
Grant-seeking & Fundraising
Draft grant applications and tailor them to funder criteria
Generate budgets or logic-model narratives from bullet-point inputs
Personalise donor-thank-you letters and end-of-year impact reports
Client & Case-work Support
Create intake forms or triage questionnaires
Draft personalised care plans or follow-up emails from case notes
Summarise multi-agency records into concise client snapshots (with privacy safeguards)
Data Cleaning & Impact Reporting
Harmonise spreadsheet formats, spot data gaps, generate simple charts
Turn service-usage logs into KPI dashboards and narrative stories
Translate raw survey results into plain-English highlights
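As an illustration of the data-cleaning use case above, spotting data gaps in a service-usage log is the kind of task an AI assistant could be asked to script. A hypothetical example (the column names and data are invented) in plain Python:

```python
import csv
import io

# Hypothetical service-usage log as exported CSV; the columns and
# values are invented for illustration.
raw = """client_id,visit_date,service
101,2025-03-01,budgeting
102,,budgeting
103,2025-03-04,
104,2025-03-09,housing
"""

# Count blank cells per column — a quick "data gap" report of the
# kind a community group might ask an AI assistant to generate.
rows = list(csv.DictReader(io.StringIO(raw)))
gaps = {col: sum(1 for r in rows if not r[col]) for col in rows[0]}
print(gaps)  # {'client_id': 0, 'visit_date': 1, 'service': 1}
```

Because nothing here leaves the organisation’s own machine, a script like this avoids the privacy questions that arise when the raw data itself is pasted into an AI tool.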
Volunteer Management
Craft recruitment ads, onboarding packs, role descriptions
Auto-schedule shifts and send reminder texts
Summarise feedback forms after events
Policy & Advocacy
Draft consultation submissions to Parliament or local councils
Summarise proposed bills and identify clauses affecting the sector
Generate talking points or media Q&A sheets
Cultural & Language Services
Translate materials into / from Te Reo Māori, Samoan, Tongan, etc.
Provide plain-language or Easy-Read versions for accessibility
Suggest culturally appropriate phrasing or metaphors
Compliance & Risk
Prepare Charities annual return narratives and reports
Generate privacy-impact assessments or Health & Safety checklists
Monitor regulation updates (e.g., Incorporated Societies Act) and summarise changes
Learning & Staff Development
Create micro-learning modules, quizzes, and role-play scenarios
Summarise best-practice guides into step-by-step Standard Operating Procedures
Provide real-time language or writing feedback to staff
Partnership & Network Coordination
Draft MoUs, partnership proposals, meeting agendas
Summarise multi-partner email threads into action lists
Compile shared calendars or resource maps
Community Engagement & Events
Generate event plans, run-sheets, scripts
Tailor promotional blurbs for local radio, newspapers, and social channels
Summarise post-event surveys into insights and “next-year” recommendations
Tech Enablement & Support
Write step-by-step guides for using software (e.g., Xero, Office 365)
Generate FAQ chatbots for website visitors
Draft digital-literacy tips for staff and clients
Conclusion
It does not make any sense to have community groups all over the country each trying to work out what safe AI usage looks like for what will, in many instances, be very similar use cases. It is simply a waste of resources. It would be great if a group could get together with a few government experts to develop generic guidance for all New Zealand community groups covering common use cases. If this is already happening and you are aware of it, please let us know in the comments below. If not, it would be great for this to happen sooner rather than later.
Privacy and related assessment for use of AI
Below is what ChatGPT (4o 2025-05-27) thinks is needed for an assessment regarding the use of AI by a community group.
Privacy Impact Assessment (PIA): Before implementing any generative AI tool, conduct a thorough PIA to identify and mitigate potential privacy risks. This assessment should evaluate how the AI tool collects, uses, stores, and discloses personal health information.
Data Sovereignty: Ensure that patient data is stored and processed within New Zealand or in jurisdictions with comparable privacy protections. Using AI tools that store data overseas, particularly in countries without equivalent privacy laws, could breach New Zealand’s data sovereignty requirements.
Informed Consent: Obtain explicit, informed consent from patients before their data is processed by AI tools. Patients should be made aware of how their information will be used, stored, and protected.
Data Minimization and Anonymization: Only collect and process data that is necessary for the intended purpose. Where possible, de-identify or anonymize data to reduce privacy risks.
Human Oversight: Implement processes to ensure that outputs from AI tools are reviewed by qualified personnel before any decisions are made based on that information. This helps prevent errors and biases inherent in AI-generated content.
Vendor Due Diligence: Assess the AI tool provider’s privacy and security practices. Ensure they have robust measures in place to protect personal health information and that they comply with New Zealand’s privacy laws.
Whether Compliant ChatGPT meets these
ChatGPT’s (4o 2025-05-27) view on each of these criteria and whether Compliant ChatGPT meets them.
1 Privacy-Impact Assessment (PIA)
No public statement of a pre-done PIA for NZ law. Any NZ provider would still need to run its own PIA under the Privacy Act 2020 and the Health Information Privacy Code.
Unmet → You must do your own PIA.
2 Data sovereignty / location
The site says data are kept for only 24 hours by default and can be made non-persistent, but it does not disclose where those 24-hour stores or the upstream OpenAI servers sit. They are likely offshore (the service is US-centric HIPAA tooling).
Partially met. Safe only if your organisation is comfortable with offshore processing or negotiates a NZ-hosted option.
3 Informed consent
Tool provides no patient-facing consent workflow; that remains the customer’s responsibility.
Unmet at platform level → must be handled in your intake/consent process.
4 Data minimisation & anonymisation
Text containing PHI is tokenised/redacted client-side before it ever reaches OpenAI; original PHI is not stored.
Strongly met for textual prompts; you still need policies for file uploads & metadata.
5 Human oversight
Platform supplies AI output only; whether a clinician reviews that output is up to your workflow.
Partially met → depends on your SOPs.
6 Vendor due diligence / contractual controls
Offers a Business Associate Agreement (BAA) on paid plans and markets itself as “HIPAA-compliant.” No evidence of ISO 27001, SOC 2, NZISM, or explicit alignment with NZ privacy law.
Partially met. You would still need to review the BAA, security controls, breach-notification terms, etc.
If you have any experience with how community groups are using AI and the policies they are implementing, please let us know below.