Feed aggregator

The Drop Times: Orchestration is the Message

Drupal Planet -

Dries Buytaert has made a clear and timely case: orchestration is no longer a supporting layer in software architecture. It is becoming the hub where business logic resides, workflows are developed, and decisions are executed. This shift elevates orchestration tools from optional utilities to essential infrastructure. For those involved in building and maintaining digital platforms, this is not just a new idea. It is a new foundation.

The impact on existing platforms, including Drupal, is significant. As orchestration becomes the layer where integration, automation, and intelligence reside, every platform must reconsider its position within a broader network of systems. Drupal is well-suited to operate as a content and data hub; however, it must evolve to function as part of a distributed ecosystem, rather than assuming a central or controlling role. This requires architectural flexibility and a willingness to adapt.

What matters now is how the community responds. The orchestration layer is becoming the connective tissue of digital operations. That demands shared standards, openness, and collaboration. If this is where modern software systems come together, then the values behind it will shape how inclusive, resilient, and extensible those systems can be. Dries has shown where things are heading. The responsibility to build in that direction belongs to all of us.

Before we close, I'd like to extend a quick invitation. On Wednesday, November 5, we're holding our monthly TDT Townhall, an open planning meeting where we share progress, shape priorities, and listen to the community. If you're aligned with our mission to expand Drupal’s reach and want to contribute ideas around content, outreach, or technology, we’d love to have you on board. It’s a one-hour session, fully open, and you’re welcome to listen or bring something to the table. Join us on Google Meet: https://meet.google.com/vrw-naam-ire

Tutorial, Case Study, Discover Drupal, Organization News, Drupal Community, Free Software, Event

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now. To get timely updates, follow us on LinkedIn, Twitter, Bluesky, and Facebook. You can also join us on Drupal Slack at #thedroptimes.

Thank you.
Sincerely,
Alka Elizabeth,
Sub-editor, The DropTimes.

Ramsalt Lab: How Ramsalt and artificial intelligence transformed “secret” public documents

Drupal Planet -

Yngve W. Bergheim (CEO) and Thomas Andre Karlsen (Developer) | 03.11.2025

The public sector is overflowing with knowledge – from reports to evaluations. The challenge has long been to find this information in the sea of PDFs. Ramsalt Lab has been central to the development of Kudos – a national knowledge portal. By implementing groundbreaking AI-based document analysis, we have helped turn a chaotic document base into a goldmine of searchable insight.

The public sector produces enormous amounts of knowledge – reports, studies, evaluations and analyses. But where does all this knowledge go? Too often it ends up in digital drawers, hard to find and impossible to search.

This is the problem Kudos (Knowledge Documents in the Public Sector) solves.

Ramsalt has been a key technical partner in the development of this groundbreaking service for the Directorate for Public Administration and Financial Management (DFØ). Our developer, Thomas Andre Karlsen, has been a key member of the team that built the technical engine that powers Kudos.

Ramsalt Lab is a digitalization partner of the Directorate for Public Administration and Financial Management. The contract has a value of up to 150 million NOK over a 10-year period, and Kudos is one of several projects Ramsalt is doing for the directorate.

Kudos is a joint search solution that brings together knowledge and management documents from ministries, directorates and government agencies in one place. The service, which is a collaboration between DFØ and the National Library, aims to streamline the entire knowledge process – from production to sharing and reuse.

The Challenge: A Bunch of PDFs

Building a portal for tens of thousands of documents (at the time of writing, over 40,000!) is a big task in itself. But the real challenge lies not only in the volume. It is in the metadata.

Many of the documents that are retrieved (often via "scraping" of various public websites) lack basic information:

  • Who actually wrote the report?
  • Who was the client?
  • What are the most important keywords?
  • Can I get a good summary?

A document without good metadata is almost as bad as a lost document. This is where the magic – and Ramsalt’s contribution – comes in.

The solution: AI that cleans, analyzes and enriches

To solve the metadata tangle, Ramsalt's developer Thomas Andre Karlsen has been central to building an advanced AI-based document analysis tool at the heart of Kudos.

This tool is not just a simple tagging function. Here's how it works:

  1. Analyse: When a new document is uploaded to Kudos, the first few pages are sent to an AI language model (such as GPT-4o or GPT-5).
  2. The model reads and understands the content. It identifies and extracts:
    • A more descriptive and searchable title.
    • The actual client (actor).
    • The individual authors.
    • Relevant keywords and topics to which the document can be linked.
  3. The AI also writes a completely new, concise summary of the document.

The result is that documents that were previously black boxes suddenly become data-rich, perfectly indexed, and extremely easy to find for users. This saves countless hours of manual work and dramatically improves the quality of the entire database.
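In code terms, the flow is simple: pull the first pages of text out of the PDF, send them to the model with a structured prompt, and store the returned fields. The snippet below is a minimal sketch of that pattern, not Ramsalt's actual implementation; it assumes the openai and pypdf Python packages, and the prompt, model name, and field names are illustrative placeholders.

```python
# Minimal sketch of LLM-based metadata enrichment (illustrative, not the Kudos code).
import json
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_metadata(pdf_path: str, max_pages: int = 5) -> dict:
    # Only the first few pages are sent to the model, as described above.
    reader = PdfReader(pdf_path)
    num_pages = min(max_pages, len(reader.pages))
    text = "\n".join(reader.pages[i].extract_text() or "" for i in range(num_pages))

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Extract metadata from this public-sector document. Return JSON "
                "with the keys: title, client, authors, keywords, summary."
            )},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example: enrich a single scraped report before indexing it.
# print(extract_metadata("report.pdf"))
```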

Ask questions about knowledge

Ramsalt's contributions don't stop there. The team has also experimented with implementing an LLM-based search engine built on a RAG (retrieval-augmented generation) pipeline.

This allows users to "talk" to the database. Instead of just searching for a word, one can ask a question like: "What is the criticism of the Health Platform?"

The system will then find the most relevant documents in the Kudos database, read them, and then generate a fact-based answer for the user, complete with source references. This is a completely new way to extract precise information from a massive knowledge base.
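Conceptually this is a two-step pattern: first retrieve the most relevant documents (for example by embedding the question and the documents and ranking by similarity), then ask the model to answer using only those passages and to cite them. The sketch below illustrates that retrieve-then-generate loop; it is not the Kudos implementation, and the embedding model, in-memory "index", and prompts are assumptions standing in for a real vector store and production prompts.

```python
# Minimal retrieve-then-generate (RAG) sketch -- illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question: str, documents: list[dict], top_k: int = 3) -> str:
    # 1. Retrieve: rank documents by cosine similarity to the question.
    doc_vecs = embed([d["text"] for d in documents])
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    top = [documents[i] for i in np.argsort(scores)[::-1][:top_k]]

    # 2. Generate: answer from the retrieved passages, citing their titles.
    context = "\n\n".join(f"[{d['title']}]\n{d['text']}" for d in top)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided sources, "
                                          "and cite the titles you relied on."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```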

A gold mine for the state, researchers and journalists

As the newspaper Media 24 has pointed out, Kudos is a gold mine for anyone who needs to know what the state knows.

Public employees can easily find relevant knowledge from other organizations and avoid ordering reports that already exist. Researchers gain access to a huge dataset for their analyses. Journalists can use the service to uncover connections and dig into public administration.

At Ramsalt, we pride ourselves on delivering technology that has real societal value. The Kudos project is a prime example of how we use our expertise in Drupal, data mining, and artificial intelligence to build robust and intelligent solutions for the public sector.

Does your business have large amounts of data that are difficult to utilize? Do you need help structuring, enriching and making your information accessible? Contact us at Ramsalt for a chat about how we can use AI and smart search solutions to transform your data into valuable knowledge.

Tag1 Insights: Drupal CMS 2.0 Performance Testing

Drupal Planet -

Takeaway:

Back at the end of 2024, Tag1 added performance testing to Drupal CMS 1.0 prior to its release. We then did a comparison between Drupal CMS and WordPress in which we dug into out-of-the-box performance between the two, and Drupal CMS came out pretty well. Now Drupal CMS is preparing for its 2.0 release, including a new theme, Mercury, and Drupal Canvas enabled by default. In preparation for this release we updated our performance tests, and wrote a new one for Drupal CMS’s new site template, Byte.

Gander: PHPUnit Performance Testing for Drupal

Drupal core’s performance tests allow PHPUnit assertions to be made for both back-end and front-end performance metrics, including the number and size of CSS and JavaScript files, as well as the number of database queries and cache operations. Between these metrics, we get a high-level overview of how a site performs. Additionally, timings from tests, which can vary based on external factors and aren’t precise enough to assert against exact values within a PHPUnit test, are sent to an OpenTelemetry Grafana/Tempo dashboard. Gander is used to test Drupal core, Drupal CMS and site templates, contributed modules such as the popular Redirect module, and real sites via Drupal Test Traits integration, such as london.gov.uk.

We added performance testing while Drupal CMS 2.0 was in alpha and Drupal Canvas was in release candidate, and that helped uncover a few issues.

Front End Performance

Drupal CMS’s new Mercury theme is based on Tailwind CSS, which allows for a lot of customization without having to write new CSS. While this should allow Drupal CMS theming to be very flexible, it does involve serving quite a lot of CSS out of the box.

When testing the Byte site template, we wrote a performance test that covers one of the more common front end performance problems that sites face.

Drupal’s CSS and JavaScript aggregation combines smaller files into minified larger files, to reduce HTTP requests and filesize. This is an effective strategy for most Drupal libraries and components, which often include many small CSS and JavaScript files attached to specific page elements, and which may only appear on specific pages or for users with certain permissions, etc. Serving these files on demand reduces unused CSS and JS when those elements aren’t there.

However, when a large CSS or JavaScript file is included on every page (or most pages), that file can be duplicated between different asset aggregates, meaning the same visitor downloads it over and over again if they visit multiple pages with slightly different combinations of libraries. Our test shows over 2 MB of CSS being downloaded across six pages, with each page having around 350 KB of CSS individually.

We filed an issue and merge request (MR) against the Mercury theme to exclude the main.min.css file from minification (because it’s already minified, this avoids wasting CPU and memory trying to minify an already minified file) and from aggregation, so that only a single copy is downloaded per visitor. This issue has already been committed and the improvement should be visible in the performance test for Byte once there’s a new release.

While we were looking for the source of those large CSS files in Chromium devtools, we also noticed some chained HTTP requests for both fonts and JavaScript files and opened issues to document both. When the browser has to first parse a CSS or JavaScript file before loading fonts or another file, this requires at least two round trips from the browser before the chained request can be served, which can significantly affect page loading performance.

Byte also includes a webform newsletter signup on the front page. We noticed that the Webform module attaches a details library to every page showing a webform, whether or not the webform will render a details element. Because the details library depends on jQuery, this adds around 100 KB of JavaScript for anonymous users that otherwise might not be needed. This discovery is an example of how adding performance tests for Drupal CMS can test not only Drupal CMS itself, but also many of Drupal’s most popular contributed modules, finding performance issues that can affect many sites in the wild.

Canvas

Our original Drupal CMS 1.0 tests cover both anonymous and authenticated visitors. For Drupal CMS 2.0 we noticed that authenticated visitor page requests required many more database queries and cache operations than Drupal CMS 1.0. We tracked this down to the Canvas module, which sets max_age: 0 when rendering its ComponentTreeItemList field in some circumstances, disabling the dynamic page cache for any request that renders Canvas’ page entity type. We also noticed that the tree rendering itself is quite expensive, although this may become less of an issue once render caching is fixed.

These were the only back-end performance issues we noticed, so assuming they’re fixed prior to release, back-end performance should be broadly similar between Drupal CMS 1.0 and 2.0.

Conclusion

These findings show how important it is to validate performance before releasing code to production, so that unexpected regressions in application or browser performance can be caught and fixed before reaching site visitors. At the time of writing, several of the issues we opened already had MRs attached or had already been committed by the Drupal CMS team. Drupal’s Gander testing framework, originally developed by Tag1, provides an ideal mechanism to add continual testing with repeatable results to any kind of Drupal project.

Keep Your Drupal Sites Fast and Reliable

Performance testing isn’t just a step in development, it’s the foundation of a seamless user experience. Tag1 helps engineering, product, and marketing teams ensure that sites are fast, stable, and ready to scale. Using performance monitoring solutions like Gander, we make performance enterprise-ready so your sites stay smooth, secure, and always available.

Learn more about website performance management and let us know how we can help!

Image by Lalmch from Pixabay.

drunomics: Through the (DrupalCon) Looking Glass

Drupal Planet -

When DrupalCon Vienna was announced at DrupalCon Barcelona, my colleague, Oliver Berndt, turned to me immediately and said “next year will be expensive”. Yes it was for drunomics, but it was also a major opportunity for us, as the drunomics team came together to help in many ways. As a result of many volunteering activities, drunomics achieved Gold Certified Drupal Partner status, and we helped out in ways that benefitted us unexpectedly.

Good Bot. Bad Bot. Gray Bot.

Phase II Technology -

Organizations that once dealt with predictable search engine crawlers now face an explosion of AI-driven bot traffic that follows entirely different rules. Understanding modern bot traffic requires moving beyond simple “allow” or “block” decisions to a more nuanced framework that acknowledges that different categories of bots require different management strategies. For AI bots, binary solutions don’t work: unlike traditional attacks, where the solution is simply to block malicious traffic, health organizations often need some level of AI bot access to maintain discoverability and medical authority in an AI-powered world.

Synthetic Personas: The Next Evolution in Audience Intelligence

Phase II Technology -


Sometimes even our most reliable tools need a refresh.

Think of it like a new haircut or an updated wardrobe. Still you, just a little more current and reimagined for who you’ve become. That’s where we are with personas.

For decades, personas have helped teams understand their users and see the world through their eyes. They’ve been our shorthand for empathy and conversation starters in product planning meetings. But over time, many personas have become forgotten static snapshots — relics of who our users used to be.

In a world where behavior shifts by the quarter, not the decade, we need personas that can evolve just as fast.

The Limits of Traditional Personas

Personas first took shape in the 1990s, when software design began centering around people instead of technology. Alan Cooper, one of UX’s early pioneers, created them to help product teams stop designing for themselves and start designing for their users.
It worked. Personas brought real human goals and frustrations into design conversations. They made abstract users feel real. But traditional personas also came with limitations:

  • They capture behavior at a single point in time.
  • They’re often based on small data sets or limited research.
  • Once created, they sit untouched as products (and their users) evolve.

 

Traditional personas are like well-worn photos pinned to the wall: meaningful, but frozen in time. And as our users keep moving, we need a way to keep up.

Innovation in Practice

Synthetic personas are a bridge between human insight and applied intelligence. 

They combine the designer’s pursuit of empathy with the precision of AI-driven data, helping organizations better understand and engage their audiences. By blending creativity with computation, we’re building systems that not only inform design but elevate it, making customer understanding smarter, faster, and more adaptive. It’s putting AI into practice to help our clients innovate with confidence and purpose.

What Are Synthetic Personas?

Synthetic personas are AI-generated user archetypes that evolve over time as they learn from data. Instead of representing a fixed snapshot of a single user, they draw from real world data sources such as:

  • User research and interview transcripts
  • Web analytics and behavioral data
  • Interaction logs and feedback patterns

Think of them as a user you can actually talk to. You can ask them what they value, how they might interpret a navigation label, or which tone of voice feels more trustworthy. They’re a virtual audience in our back pocket helping us explore ideas. They won’t replace the insights you get from a conversation with real users, but they give you a head start and a way to test early hypotheses before you ever schedule a usability session.
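To make that concrete, here is a toy sketch of how one might "talk to" a synthetic persona by prompting a language model with a profile distilled from research data. It is illustrative only, not Phase2's implementation; the persona fields, prompt, model name, and the openai Python package are all assumptions.

```python
# Toy sketch: asking a synthetic persona a design question (illustrative only).
from openai import OpenAI

client = OpenAI()

# In practice this profile would be distilled from interviews, analytics,
# and feedback data rather than written by hand.
persona = {
    "name": "Dana",
    "role": "Clinic administrator",
    "goals": "Find policy updates quickly; avoid clinical jargon.",
    "frustrations": "Deep menus, ambiguous labels, slow search.",
}

def ask_persona(question: str) -> str:
    profile = "\n".join(f"{key}: {value}" for key, value in persona.items())
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Role-play this user persona and answer "
                                          "in their voice:\n" + profile},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# e.g. ask_persona('Would you click "Resources" or "Knowledge Hub" to find policy updates?')
```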

When to Use Them

Phase2 is experimenting with synthetic personas to augment our existing processes, not replace them. Here’s how they help:

  • Testing navigation and labels. While creating a new information architecture, we use synthetic personas to simulate how different audiences might interpret navigation terms or categories.
  • Exploring tone and perception. By generating personas for distinct audience segments, we can check whether copy feels clear, credible, too technical, or too casual, helping us adjust before usability testing.
  • Supporting rapid brainstorming. During early design sprints, synthetic personas act as conversation partners: reacting to prompts, evaluating flows, or prioritizing needs in real time.

Synthetic personas don’t replace real user validation; rather, they help us get to the right questions faster, so we can make the most of our user research sessions.

 

The Drawbacks (and Why They Matter)

Synthetic personas aren’t perfect. Here are some things to keep in mind:

  • Bias in, bias out. Like any AI tool, they reflect the data they’re trained on, and that data can include bias or gaps.
  • Outdated data. If not refreshed regularly (with user research), they can quickly become out of touch with real world behavior.
  • False confidence. Because they sound convincingly human, it’s easy to forget they’re not.

That’s why the most important skill in working with synthetic personas isn’t technical, it’s having good judgement. The real risk isn’t bad data so much as it is blind trust.

The Future of Understanding Users

Synthetic personas depend on real human research. Their strength lies in speed and scalability, but their accuracy depends on the same thing traditional personas have always needed: real people.

Empathy and evidence aren’t opposites, they’re partners. Synthetic personas give teams the ability to combine human curiosity with AI-driven learning, building products that respond more intelligently to real world behavior: faster, smarter, and a little more adaptive than before.

Rachel Broardt, UX Strategist | Thu, 10/30/2025 - 15:26

Drupal AI Initiative: Drupal AI development progress week 41 and 43

Drupal Planet -

This summary covers three weeks instead of the usual bi-weekly progress report, and it will be a little bit different. Since we were very busy with the Driesnote for DrupalCon and the release of AI and AI Agents 1.2.0 (yay!), we were mostly focusing on stability fixes.

DrupalCon Vienna also happened, and for me personally, so did PHP Longhorn in Austin. DrupalCon gave us an opportunity to meet in person, regroup and plan ahead for the 2.0 release. So we will cover that as well in the progress reports.

For me personally it was a crazy event compared to other DrupalCons I have been to. Many people to talk to, and many people I wanted to talk to, but never got the time to do it.

We did prepare the demo for the DriesNote, and it's one of the demos I have personally been the most comfortable sharing. Some of the demos that get recorded are on the level of something we strive for, rather than what is there now. The actual output of the Canvas AI for the examples in the DriesNote was over 50% reliable, where you could almost just use it as-is, and most of the rest produced a version that just needed minor tweaking. This is based on fairly strict criteria on how components should be placed, images should be picked, and copy should be written.

Aidan Foster from Foster Interactive, who was one of the main contributors to the demo, did a follow-up LinkedIn post that you should not miss.

And if you do not believe me - you can run the demo yourself.

AI Context is out in dev version

Well, it has been out for some while, but we wanted to introduce it with the DriesNote. The idea is that AI Context, or the Context Control Center (name TBD), is the central point for any context your Drupal site will need, whether for AI or via MCP.

Right now it's focusing heavily on agents, but in the future it would also be usable in Automators, translations, or anything that needs stricter control over how to generate via AI. This project has been driven by Salsa Digital in general and Ahmed Jabar in particular, who spent weekends to have it ready before the DriesNote. A huge thanks to them!

Try it out and help out in: https://www.drupal.org/project/ai_context 

Prompt library used in AI Translate

In 1.2.0 we added a prompt library. The initial implementation was in AI Content Suggestions, but right before the release we also added an implementation in the AI Translate submodule.

This means that the translation prompts are now managed via the prompt library and can be reused in the future for other translation tasks that could be added to, for instance, AI CKEditor or AI Automators.

Webform Agent can be tested

One of the things I wanted to demo in Vienna was a new kind of agent and how you could use that agent together with MCP and agent-to-agent communication. Webform was a clear candidate for it. The demo included being able to create webforms from free text or even ugly hand-drawn sketches, and then via MCP connect to a VAPI agent and have that agent call someone, run the webform as a voice survey, and then save the submission.

We ended up deciding to put the agent in a module, even if it's still very rough around the edges. You can find it at https://www.drupal.org/project/ai_webform_agent. Nick Opris is putting a lot of effort into moving it into the Tool API and making it more stable.

Flag added for Tools and Structure combinations

After testing different providers, we came to the conclusion that there are providers that do not allow the combination of using Tools/Function Calling and asking for a structured output.

Because of that, we have added a flag that providers can set to indicate whether they are able to do this.

For AI Agents we will then be able to figure out whether this is possible, and add a feature where we can run another call on the finished loop to structure the output.
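The pattern is easy to sketch: check the provider's capability flag, and if tools and structured output can't be combined, run the normal tool-calling loop first and then make one extra call purely to shape the final answer. The snippet below is an illustrative Python sketch of that fallback; the actual Drupal AI module is PHP, and the Provider interface and method names here are invented for the example.

```python
# Sketch of the capability-flag fallback described above (illustrative only;
# the real Drupal AI provider API is PHP and differs from this).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Provider:
    chat: Callable[..., Any]                     # the provider's chat/completion call
    supports_tools_with_structured_output: bool  # the new capability flag

def run_structured_agent(provider: Provider, messages, tools, schema):
    if provider.supports_tools_with_structured_output:
        # One request: tool calling and structured output together.
        return provider.chat(messages, tools=tools, response_schema=schema)
    # Fallback: run the tool-calling loop normally, then one extra call at the
    # end of the loop solely to restructure the final answer into the schema.
    loop_result = provider.chat(messages, tools=tools)
    return provider.chat(
        [{"role": "user", "content": f"Restructure this answer: {loop_result}"}],
        response_schema=schema,
    )
```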

Planning 2.0

A lot of the time was put into planning a way forward to the 2.0.0 release. Some things are already decided or were decided in Vienna.

This includes:

  • A huge refactor of the AI Automators, so it works with multiple automators per field.
  • A huge refactor of the code to follow PHPStan level 7 and some more standardizations.
  • Add a lot more testing to the modules that will stay in there.
  • Moving the AI Agents runner into AI Core. It is such a common pattern that any third-party module should have the possibility to run an agent as part of its workflow. This means that AI Agents will either be deprecated for 2.0 or become a pure GUI module.
  • Use Tool API as the main way of writing function calls, since these can then be reused for ECA, VBO or MCP (many three-letter abbreviations). It is still not decided whether executable function calls are deprecated for 2.0, but we would recommend using Tool API for any tool going forward.
  • Move some of the submodules out, since that will make release iteration simpler, both for the modules that become contrib modules and for the AI Core module.
  • Move the AI Translate module into a contrib module. There are multiple translation modules that solve different problems, and we should not gatekeep a specific solution for it.
  • Move the AI Search module into a contrib module. This module will then be able to develop at its own speed, independent of AI Core releases.
  • Move the Field Widget Actions module into a contrib module. This module only exists in the AI module because it was a brainchild of doing widgets for the Automators, but since it's not directly connected to AI, it's being moved out.
  • Move the AI Validations module into a contrib module. This module is an extension of the Field Validations module rather than of the AI module, so it made little sense to keep it in the AI module.
  • Move the AI Content Suggestions module into a contrib module. This module is a nice, easy-to-install module to showcase what AI can do for you, but there are many different content modules, and we should not gatekeep this either.
  • Remove the AI Logging module, possibly into a contrib module (do you want to help maintain it?). We now have an AI Observability module in AI Core that plays nicer both with Drupal's internal logging and with external tools like OpenTelemetry and Datadog.

Be sure to keep an eye out here or on LinkedIn to stay up to date with the latest developments. Visit the AI Initiative home page for ways to connect, events and webinars.

Centarro: Belgrade, a Drupal Commerce Theme, Evolves to Match Real-World Requirements

Drupal Planet -

Belgrade 3.0.0 is the most significant update to our Drupal Commerce theme since its initial release. Shaped by the needs of clients and the functionality we have implemented again and again for real-world eCommerce implementations, it goes beyond just a normal theme.

Rather than being a rigid starter kit, Belgrade is now firmly centered around Drupal Commerce Kickstart, showcasing best practices and the full capabilities of the platform. You can use it as an example when designing and building your own Drupal Commerce themes.

Modernized login and user account pages

We brought a more contemporary design to user authentication pages that matches patterns consistently used across client projects. The login, registration, and password reset forms now feature a centered, card-based layout with:

  • Optional background image - Add visual interest to authentication pages with a customizable background
  • Custom styling - Modern form design matching your checkout experience, moving away from the default Drupal primary tabs

The whole experience can be toggled.

Implementing cleaner login forms is a common step for many Drupal Commerce sites to help them look fresh and modern. Belgrade 3.0.0 makes this step much easier.

Read more

Web Wash: Using AI Automators (Drupal AI) in Drupal CMS

Drupal Planet -

Artificial intelligence continues to reshape content management systems, and Drupal is embracing this transformation through the Drupal AI initiative.

The video above demonstrates how to use AI Automators in Drupal CMS to automate content generation, transcribe audio files, and streamline editorial workflows.

This guide covers AI Automators within the Drupal AI module suite. Learn to set up basic and chained automators, transcribe audio, integrate AI into CKEditor, and auto-generate social media posts.

Drupal.org blog: GitLab issue migration: immediate changes

Drupal Planet -

At DrupalCon Vienna, we opened the opt-in period for module maintainers to volunteer their modules to be migrated to GitLab issues. You can opt yours in at #3409678: Opt-in GitLab issues.

That means that we will have some projects with issues on Drupal.org and some other projects with their issues on GitLab during this transition period. Due to this, some things will change in our current systems.

Changes to Drupal.org

The issue cockpit on each project's page will go away. The current issue cockpit that you see on projects reads data from our internal issues, but as projects transition to GitLab issues this block no longer makes sense. We will replace it with a simple "Issues" link that will take you to the right issue queue, whether it is on GitLab or Drupal.org.

Parent and related issues will now be connected via a full URL. They used to be connected via entity reference fields pointing at internal issues. Now that we have two systems for this, these will be links that, once rendered, will display metadata such as title and issue status, as we did with internal issues. We will be able to link both Drupal.org and GitLab issues in these new fields, and the old entity reference fields will go away.

What's next?

We ask project maintainers to help us at the Drupal Association iterate and improve on this process as we migrate more and more projects. We know that change can take time to be adopted, and we are really excited to help project maintainers move their issues into GitLab.

There are almost 200 projects with more than 1000 issues, and around 2000 projects with more than 100. 
Drupal "core" has more than 115K issues.

The roadmap will be (in each iteration, we will address feedback, fix bugs...):

  • Migrate projects that opted in
  • Make this the default for new projects
  • Migrate low-risk, low-usage, and/or sandbox projects
  • Migrate remaining projects, excluding a few selected high-volume, high-risk projects
  • Migrate the rest of the projects, including core

We are very excited about this transition, and we truly think it will be an improvement to the contribution experience. We are also thankful to the community for helping us with this.

Drupal Association blog: Drupal to Enhance Security and Developer Tools thanks to Sovereign Tech Fund Investment

Drupal Planet -

The Drupal Association has received €201,000 from the Sovereign Tech Fund to enhance Drupal's GitLab infrastructure, with a focus on security, testing efficiency, and design tools. This funding will enable critical improvements including completing the migration of Drupal's security issue management system to GitLab, optimizing CI/CD testing across thousands of repositories, and implementing new tools for UX and design contributors.

This continues the Sovereign Tech Fund’s support of Drupal. In 2023, the Sovereign Tech Fund funded major work to support the move from Drupal.org's homebuilt contribution tools to the GitLab platform. 

The self-hosted GitLab instance at git.drupalcode.org is maintained by the Drupal Association and used by contributors all over the globe. In 2024, there were 7,276 unique individuals using git.drupalcode.org to contribute to 69,204 issues. These contributors represent an international community of users who support critical Drupal installations serving the public.

The additional funding will enable the Drupal Association to further enhance our use of GitLab in the following key areas:

  • Migrate security issue management to GitLab
    Our existing security portal is running on an end-of-life version of Drupal, under extended support, and isn't integrated with our modern developer tools. Finalizing the move of our security team issue management to GitLab will provide the security team with better tools and make it easier to onboard new members.
     
  • Optimize CI/CD testing
    We currently support testing for tens of thousands of repositories in the Drupal ecosystem. By further optimizing our testing configuration, we can reduce redundant tests, improve performance, and potentially expand to new types of testing like visual and performance regression testing.
     
  • Improve tools for UX and Design contributors
    We'll implement better project management templates and explore integrating with design tools like Storybook and/or Figma to support our UX and Design contributors—who will then have the tools they need to help make Drupal easier, more intuitive, and more beautiful than ever.
     
  • Share our CI strategy with other open source projects
    We'll document and share our approach to managing CI testing across thousands of repositories to help other large open source projects facing similar challenges.
     

The work commissioned by the Sovereign Tech Fund will not only enable us to advance strategically, driving meaningful progress and making a positive impact within the Drupal community, but also strengthen the open source platform for users everywhere.

We are grateful to the Sovereign Tech Fund for this collaboration. This funding reflects their continued dedication to open source and their confidence in the Drupal Association and the community's ability to innovate and ensure the future of web development.
