United Nations: Social Protection Knowledge, Powered by LLM
In an era where access to knowledge can transform lives and societies, the United Nations platform stood at a critical crossroads. With a vast repository of over 15,000 data resources on socialprotection.org, this platform represented a goldmine of insights that could shape policies, inform research, and improve social welfare systems worldwide.
The platform faced four interrelated challenges:
- The manual categorization of new resources created bottlenecks, slowing down the very knowledge sharing that could accelerate social progress;
- Researchers and practitioners lost precious time navigating through thousands of documents, often unable to find the specific insights needed for their work;
- As the platform grew, maintaining the high quality and reliability of content, crucial for evidence-based policymaking, became increasingly complex;
- Perhaps most critically, those who needed this knowledge most, stakeholders in low- and middle-income countries, faced significant barriers to accessing relevant information for their specific contexts.
These challenges weren't merely technical inconveniences. For the United Nations, they represented a direct threat to its mission of promoting inclusive growth and sustainable development. In regions where every policy decision can impact millions of lives, the inability to efficiently access and utilize vital social protection knowledge posed a significant obstacle to progress.
This case study explores how, by using LLMs, the United Nations is transforming this knowledge management challenge into an opportunity to revolutionize how social protection information is shared, accessed, and implemented globally.
The Starting Point
The platform's limitations were clear: searches yielded inconsistent results, document categorization consumed valuable staff time, and scaling the system while maintaining quality posed significant challenges. These weren't mere inconveniences—they were barriers preventing vital information from reaching those who needed it most.
Vinta's response demonstrated both expertise and caution. Rather than immediately diving into development, we proposed a focused two-week Product Discovery sprint. While the UN had already identified Large Language Models (LLMs) as a promising direction, success required careful planning and validation to ensure resources would be invested wisely.
The sprint was meticulously structured around four key objectives:
- Designing a scalable and robust LLM integration architecture that could grow with the platform's needs
- Prototyping AI-powered search and categorization capabilities
- Developing a detailed, milestone-driven implementation timeline
- Identifying potential technical risks and establishing mitigation strategies early in the process
This systematic approach proved invaluable. By testing core assumptions and validating key concepts before full-scale development, the team avoided the common AI implementation pitfalls that often lead to costly delays and revisions.
Product Discovery: Merging Technical and Domain Expertise
Vinta structured a compact, specialized team to match the United Nations' needs:
- A Lead Developer brought deep AI and LLM expertise, which is essential for evaluating technical feasibility and designing the RAG pipeline architecture;
- A Principal Product Designer focused on making complex search and categorization functions intuitive for researchers and academics;
- A Product Manager tied everything together, balancing technical possibilities with the UN's organizational constraints.
This team worked directly with the Social Protection Department, the main stakeholder, whose goal was to transform how researchers and policymakers access social protection knowledge worldwide. Working directly with their Product Owner meant every decision was validated against real operational needs. This mix of technical, design, and domain expertise proved crucial: the team could tackle AI architecture challenges while ensuring the solution would work within the UN's existing workflows.
The two-week sprint was structured to maximize efficiency:
- Technical feasibility sessions in the mornings;
- Design workshops in the afternoons;
- Async feedback through Microsoft Teams between sessions.
Empowering the Platform with AI Capabilities
In the early stages of the sprint, our Principal Designer identified the platform's target users as academic researchers. Based on previous research and conversations with UN stakeholders, we gained further insights into their domain, interests, goals, daily challenges, and level of technical fluency.
We then mapped all the key steps, touchpoints, and goals of the user journey through an online workshop session with the Product Owner and Lead Engineer, providing valuable input to prioritize features for the roadmap and prototype built later on.
Prototyping an AI-Powered Solution
Through carefully selected design workshops and desk research, we prototyped a solution highlighting how the platform could position and positively differentiate socialprotection.org from other high-profile academic research platforms. The chosen solution was continuously scrutinized throughout the sprint through daily async feedback via Microsoft Teams, their primary communication channel.
We made the sign-up process significantly more efficient by highlighting all the benefits, making it stress-free to create an active profile on their platform.
Our strategic additions, which made it into the final mockups, included advanced filtering options, AI-powered features, and an enriched content repository. The AI capabilities featured a summarization tool that facilitates scanning through search results across a wide range of resources—publications, reports, case studies, academic papers, data, and research tools—keeping stakeholders informed of the latest trends and advancements in the field.
We also proposed how AI-powered suggestions could enhance the experience through contextual reading lists, personalized recommendations, and automated categorization upon content submission. These features will significantly streamline content search and contribution, transforming an arduous manual task into a simple button click, thus increasing adoption and usage.
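As a sketch of how automated categorization upon submission might work, a zero-shot LLM prompt can tag a new resource against a fixed taxonomy without any custom model training. The taxonomy labels below are illustrative assumptions, not the platform's actual taxonomy:

```python
# Minimal sketch of zero-shot categorization with an LLM.
# The taxonomy here is illustrative; the real platform taxonomy differs.
TAXONOMY = [
    "Cash transfers",
    "Social insurance",
    "Labour market programmes",
    "Health and nutrition",
]

def build_categorization_prompt(title: str, abstract: str) -> str:
    """Assemble a prompt asking the model to pick matching taxonomy tags."""
    labels = "\n".join(f"- {label}" for label in TAXONOMY)
    return (
        "You are tagging resources for a social protection knowledge base.\n"
        f"Allowed tags:\n{labels}\n\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        "Return a comma-separated list of the tags that apply."
    )

prompt = build_categorization_prompt(
    "Cash transfer programmes in low-income settings",
    "A review of unconditional cash transfer evidence.",
)
# This prompt would then be sent to the LLM of choice via its API,
# and the response parsed into tags attached to the submission.
```

Because the model only chooses among the allowed tags, the same prompt structure scales to a taxonomy of any size without retraining.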
Boosting the Platform's Knowledge with RAG
Our solution also included a Technical Assessment report outlining an architecture based on a Retrieval-Augmented Generation (RAG) pipeline. Before Large Language Models were widely available, integrating AI into a product involved months of engineering work to collect data, build training and validation datasets, select models, and fine-tune them to achieve the desired results.
The final model, or even several models, would later be integrated into the product. In our experience, building and training custom models is a long, error-prone task that is often too expensive for most clients.
LLMs offer a significant advantage for natural language-based products. With zero-shot learning, a single model can handle multiple tasks that traditionally required different models and extensive training, simplifying the product infrastructure. We proposed supercharging the LLM's generative ability with our client's proprietary data, leveraging RAG pipelines to get contextual results with minimal engineering effort and low cost.
Established technologies such as vector databases and embedding techniques are added to the pipeline to augment the model's knowledge base and produce custom responses without manual training. These advancements reduce development costs and time-to-market while enhancing user experience with more natural and contextually aware interactions.
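A stripped-down illustration of the retrieval step at the heart of such a pipeline follows. For brevity it uses toy bag-of-words vectors and cosine similarity in place of a real embedding model and vector database, which is an assumption, not the production architecture:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use a trained
    embedding model and store the vectors in a vector database."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Cash transfer programmes reduce poverty in rural areas.",
    "Pension reform and social insurance coverage.",
    "School feeding improves child nutrition outcomes.",
]
context = retrieve("cash transfer poverty", docs, k=1)
# The retrieved context is then prepended to the user's question in the
# prompt sent to the LLM, grounding its answer in the platform's own data.
```

The key design point is that the model's knowledge is augmented at query time: updating the knowledge base means re-indexing documents, not retraining a model.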
Outcome
By the end of our engagement, we transformed the original requirements document, which covered many functionalities at a high level, into a narrower, strategically focused scope with actionable outputs and development timeline estimates. Our delivery has significantly empowered the socialprotection.org platform by enhancing its functionalities and sharpening its focus on value.
By examining the user experience within the socialprotection.org platform, we could empathize with our target persona and craft a solution that leveraged AI to impact their workflow, considering their most significant pain points and jobs to be done.
We introduced AI-powered taxonomic tagging, a major time saver when submitting new resources to the knowledge base that also makes search and categorization far more powerful.
As consultants, we recommend the best tool to solve our clients' problems within the cost and time they want to invest. In this case, we did not choose LLMs because AI is a hot topic, but because they were the best tool for the job. LLMs have enabled applications that would otherwise be very costly to implement, and we remain attentive to cases where they solve problems more efficiently than traditional solutions could.
These improvements will facilitate more effective research and collaboration within the community and ensure the platform remains a vital resource in social protection.