Introduction
This Toolkit for First Release provides example checklists and questions to help you deliver the no-code application (the MVP release) to users as soon as possible after gaining final acceptance. The critical steps here are to prepare for deployment, provision all of the necessary environments, gain final acceptance from end users, and roll out all necessary user support and enablement. Refer to the supporting material on both the key Roles and Responsibilities and the Application Matrix, which will be referenced throughout the guidance.
The outcome of this stage should be the no-code app that has passed all final acceptance and validation steps and has been deployed to production for use by end-users. After this you will continue to gather feedback and continuously evolve the app, all while the app is delivering value through everyday use and adoption by end-users.
Stage Pre-requisites/Inputs
The primary inputs to this stage are as follows:
Application build and integration testing has been completed for MVP scope.
Feedback from users has been captured in your backlog and is being managed as part of a roadmap of post-MVP updates.
Compliance checks have been successfully completed with internal and external groups.
Security checks have been completed with IT/Operations.
Data governance reviews have been completed with Data Owners and the Data Governance Group.
The planning and preparation for the activities in the First Release should likely have started much earlier – these can have a long lead time and may require involvement from a variety of internal functions or groups. However, this stage will ensure every step is complete and you have gained the final approvals needed to deploy into production.
Automating Deployment
In the prior stage we discussed the benefits of Governance automation to help maintain velocity of releases. We are now going to discuss Deployment automation, which is also key as you start scaling the number of developers or teams that are building no-code apps. This automation will accelerate individual and team productivity and reduce much of the complexity and error-prone manual deployment activities that can make releases challenging – all without slowing you down.
This is part of a broader trend in modern software development referred to as ‘Continuous Deployment’, which seeks to automate nearly every step of the deployment process all the way to production. Modern no-code platforms should support this type of Continuous Deployment model with their delivery approach, to allow for rapidly moving no-code applications through environments (which we will discuss later in this Stage). When a No-code Creator or Architect needs to move a no-code app to a different environment, this should ideally be triggered from the no-code platform’s IDE. This allows no-code features to move quickly and seamlessly across environments in an ‘on-demand’ fashion.
The Deployment automation may also be linked to a version control system such as Subversion (SVN), a free and open-source version control system. This allows you to have more granular control over which specific sets of changes should be transferred between environments; this is especially important with larger teams, to manage and minimize the number of conflicts as multiple developers modify the same app or reuse components.
In larger or more mature environments, your IT team may specify a set of different tools (e.g. Jenkins, GitLab) as part of a defined Continuous Integration / Continuous Delivery (CI/CD) strategy; they may want to integrate the no-code platform into an overall CI/CD process. To support this, no-code vendors will often provide an optional command line interface (CLI), which allows the vendor’s IDE and development steps to be merged into an overall CI/CD pipeline.
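As a rough illustration of how such a vendor CLI could be wired into a pipeline stage, here is a minimal sketch. The `nocode-cli` command, its subcommands, and its flags are entirely hypothetical placeholders; substitute whatever CLI your vendor actually ships.

```python
import subprocess

def build_deploy_command(package: str, target_env: str, api_token: str) -> list:
    """Build the CLI invocation to promote a no-code package to an environment.

    NOTE: `nocode-cli` and its flags are hypothetical examples, not a real tool.
    """
    return [
        "nocode-cli", "deploy",
        "--package", package,
        "--env", target_env,
        "--token", api_token,
    ]

def deploy(package: str, target_env: str, api_token: str) -> None:
    # In a Jenkins/GitLab job this would run as one stage of the pipeline;
    # check=True makes a non-zero exit code fail the stage.
    subprocess.run(build_deploy_command(package, target_env, api_token), check=True)
```

The same command list could be invoked directly from a shell step in a Jenkins or GitLab pipeline; the point is simply that the vendor CLI becomes one scripted stage among the others.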
Provisioned Environments
There can be a broad range of requirements when it comes to the number and types of environments used by your development team. In a very simple DIY team scenario, you may start with just 2-3 environments; but for more advanced applications (with multiple teams and more complex or mission-critical requirements), you could have 8+ environments being used simultaneously. The Deployment automation discussed in the prior section becomes highly important as you start to increase the number of environments, as the workflow across environments will introduce complexity and create opportunities for manual errors.
Following are illustrations of two different environment configurations and deployment scenarios:
Simple scenario:
Advanced scenario flow:
To assist with defining the correct number and types of environments, we will review the most common environments and classify them by type.
Continuous Integration (CI) Environments
Continuous Delivery (CD) Environments
Extended Environments
As you assess the complexity of the application (using the Application Matrix), and also size the project resources needed for the project team (during the Project Assignment phase in Stage 4), this will help you identify the initial set of CI and CD environments needed for MVP. Note however, as your application evolves over time, the increased features or technical complexity may also necessitate revisiting the assumptions on environments; this will be discussed as part of the Application Audit stage (Stage 12). This may lead to adding or optimizing environments to support your changed needs.
The formation of a CoE will also likely lead to revisiting the size and number of environments needed to support broader, scaled use of no-code across numerous teams and applications. The use of additional deployment automation will also likely be evaluated at this time as well, as the topics of automation and environments are closely linked.
Data Migration
Data is an essential part of your no-code application, as virtually every app will create, read, update, or delete (a.k.a. “CRUD” operations) data that exists somewhere. While sometimes all your required data may already be present in an existing SaaS or LOB application (allowing you to simply integrate to access the data), it is quite common that you will identify data ‘gaps’ – i.e. parts of data not maintained by any new or existing system. In these cases, you may find the data only exists in the form of manually maintained user lists or spreadsheets. Another common situation that causes data gaps is the retirement of legacy systems by the no-code application; the system that held the data may no longer exist as part of the new target solution. In these types of scenarios, you must plan for data migration as part of your first release.
Putting proper emphasis on data migration provides the following important benefits:
Preserves Existing Data: Organizations often have valuable data stored in existing systems, spreadsheets or databases. Migrating this data ensures continuity and prevents loss during the transition to the new app. Without proper data migration, historical records, customer profiles, and transaction history could be lost, impacting business operations.
Greater Data Consistency and Accuracy: Migrating data allows for data cleansing and validation. Inconsistent or erroneous data can be corrected before it enters the new no-code app. Accurate data also ensures reliable reporting and analytics in the new app, and decision-making relies on trustworthy information.
Enables a Seamless User Experience: Users expect a smooth transition, and migrating existing data ensures users can access their information without interruption. If users encounter missing or incorrect data, they may resist using the new app.
The following are some tips and recommended best practices on data migration to consider as part of your planning for your First Release.
Plan Ahead:
Depending upon the scope and complexity of your data migration needs, you may decide to document and formalize a detailed data migration plan for your project. This will help ensure you have defined and communicated to all stakeholders the scope, timeline, and resources needed for migrating data.
As the complexity of your data migration increases, consider formally identifying and engaging all stakeholders who are impacted by the migration. This will, of course, include those responsible for directly performing the migration, but also includes all dependent teams/groups who may be impacted by the conversion to new data, and the users/teams responsible for verifying the results.
Carefully assess your existing data and understand the data landscape – this helps you identify the full diversity of existing data sources, formats, and quality. Also categorize data and determine critical vs. non-critical data; keep in mind that not all data is equally critical to migrate.
Data Cleansing and Validation:
Cleansing of data is key to avoiding incorrect app behavior – even a high-quality no-code app will perform badly (or not at all), if it relies upon bad data. You should remove data duplicates, inconsistencies, and irrelevant records and also validate data against business rules, before migrating this data into the new no-code app.
Ensure data accuracy - validate against reference data (e.g., postal codes, product codes), to make sure that going forward there is consistency and accuracy of the data in the new no-code app.
Data should be reviewed and cleansed by the Data Stewards responsible for the data, who deeply understand the data and its variants and exceptions. However, even with highly knowledgeable Data Stewards, data cleansing can take quite some time. In many cases, data conflicts cannot be resolved automatically and require decisions from the Data Steward (e.g. resolving two customers with the same full name, email and phone, but different gender and date of birth – the Data Steward has to decide which gender or date of birth is correct). You will need to plan and allocate sufficient time for these activities to prevent them becoming a project bottleneck.
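The cleansing steps above – removing duplicates, enforcing uniqueness rules, validating against reference data, and routing conflicts to a Data Steward – can be sketched as a simple pre-migration pass. The field names (`email`, `postal_code`) are illustrative only; adapt them to your own entities and business rules.

```python
def cleanse(records, valid_postal_codes):
    """Deduplicate and validate records before migrating them into the new app.

    Duplicates are dropped outright; records that fail reference-data checks
    are collected separately for a Data Steward to review and resolve.
    """
    seen = set()
    clean, rejected = [], []
    for rec in records:
        key = rec["email"].strip().lower()        # illustrative uniqueness rule
        if key in seen:
            continue                              # drop the duplicate
        seen.add(key)
        if rec.get("postal_code") not in valid_postal_codes:
            rejected.append(rec)                  # route to Data Steward
            continue
        clean.append(rec)
    return clean, rejected
```

In practice the rejected list becomes a worklist for the Data Stewards, which is one reason to budget generous lead time for this step.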
Data Transformation and Mapping:
This step is where much of the ‘heavy lifting’ of data migration is performed. You will need to transform data formats, converting legacy formats to match the new no-code app. This also involves mapping data fields to ensure data consistency across systems, by matching entities and their attributes between the source system and the target no-code app.
Decide on a method of data migration for each entity, depending on the volume of data, its criticality, and its format. If possible (especially for large quantities of data), data migration should be automated to avoid human error and to speed the first release deployment. The no-code platform itself will often provide tools for automating data transformation and mapping, similar to how you might construct a real-time integration. (One key difference between First Release data migration and the ongoing real-time integration performed by the system is that you should usually plan to schedule data migration during off-peak hours, to minimize impact on operational systems.)
Clearly define ownership for data migration tasks as part of the overall no-code deployment. Commonly this is the No-code team itself, who may perform the migration from source system(s); or they may engage responsible additional parties for this job (e.g. technical support of a vendor of the source system, or perhaps another team/group within the organization who owns the source data).
When mapping and transforming data, sequencing and uniqueness of data are important. Define clear ordering and dependencies, which are essential to proper integrity of data in the new no-code application. For example:
Import master data first (lookup data - currencies, countries, zip codes, etc.);
Import main entities in logical sequence (e.g. import Accounts prior to Opportunities);
Define rules that enforce uniqueness of the record for each entity before import (e.g. insert or update based on certain key criteria, which may be one field or combination of fields used together); and
Tag imported data with an import timestamp tag to ease troubleshooting later if the sequence is causing unintended or incorrect behavior.
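The mapping, upsert-based uniqueness rules, and import-timestamp tagging described above can be sketched as follows. The field names and the `FIELD_MAP` are illustrative assumptions, not taken from any particular platform.

```python
from datetime import datetime, timezone

# Illustrative source-to-target field mapping for an "Account" entity
FIELD_MAP = {"acct_name": "name", "acct_country": "country"}

def transform(source_row, import_batch):
    """Map source fields onto the target no-code schema and tag the import."""
    target = {FIELD_MAP[k]: v for k, v in source_row.items() if k in FIELD_MAP}
    target["import_batch"] = import_batch   # eases troubleshooting later
    return target

def upsert(store, row, key="name"):
    """Insert-or-update keyed on `key`, enforcing record uniqueness."""
    store[row[key]] = {**store.get(row[key], {}), **row}

# Import in dependency order: master/lookup data first, then main entities
# (e.g. Accounts before Opportunities).
batch = datetime.now(timezone.utc).isoformat()
accounts = {}
for src in [{"acct_name": "Acme", "acct_country": "US"}]:
    upsert(accounts, transform(src, batch))
```

Keying the upsert on one field (or a combination of fields) is what prevents duplicate records when a migration is re-run after a partial failure.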
Choose the Right Migration Approach:
There are several common approaches to data migration including:
Direct migration: Move data directly from source to target.
Staged migration: Migrate data in phases (e.g., by department or module).
Parallel run: Run old and new systems in parallel for validation.
Regardless of the approach, it is highly recommended to try importing a small number of records first (e.g. fewer than 5), and only after success proceed with the broader migration.
Consider planning for a few waves of data migration if you intend to use two systems simultaneously (e.g. piloting the new no-code application while working in old tool at the same time).
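The trial-batch-first approach recommended above can be captured in a small wrapper. `load_fn` is a placeholder for whatever actually writes to the target no-code app (a platform import tool, an API call, etc.); the function names here are illustrative.

```python
def migrate(records, load_fn, trial_size=5):
    """Load a small trial batch first; proceed with the bulk only on success.

    `load_fn` stands in for the real loader and is expected to raise on
    failure, which stops the migration before the bulk load begins.
    """
    trial, remainder = records[:trial_size], records[trial_size:]
    load_fn(trial)       # verify the trial batch landed correctly first
    load_fn(remainder)   # only reached if the trial succeeded
    return len(trial) + len(remainder)
```

The same wrapper extends naturally to the staged-migration approach: call it once per department or module, in dependency order.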
Test, Monitor and Validate:
Test: You should plan for creating test scenarios (as part of User Acceptance Testing) that test data integrity, completeness, and accuracy along with the no-code app itself. End-users performing the UAT should validate that the migrated and transformed data is correct and meets business and operational requirements.
Monitor: Even with the best planning and testing, you will miss defects in the data. You should also consider setting up rules/alerts for data anomalies, which can catch errors in production before end-users spot them.
Validate: If you are running a parallel operation between an existing and new no-code application, then you may be able to perform some validation post-migration by comparing migrated data/outputs with original data/outputs.
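The monitoring rules/alerts mentioned above can be as simple as a set of predicate checks run over freshly migrated rows. The specific rules and field names below are illustrative assumptions; adapt the thresholds and fields to your own data.

```python
def check_anomalies(rows, rules):
    """Apply simple rule checks; return (row_index, rule_name) alerts for violations."""
    alerts = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                alerts.append((i, name))
    return alerts

# Example rules (hypothetical fields and thresholds)
rules = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "email_present": lambda r: bool(r.get("email")),
}
```

Wiring the alert output into email or chat notifications lets the team catch bad data in production before end-users stumble across it.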
While the practices above are often overlooked (or at least underestimated), effective data migration ensures a successful first release of your no-code application. Following these best practices can help maintain data integrity and minimize disruptions during the transition to the new no-code application.
Final User Acceptance
One of the unique hallmarks of no-code development (compared to traditional software development), is gathering user feedback early and continuously; this makes the final end-user approval less of a scary event (e.g. will it fail to reach signoff), and ensures most of the buy-in and alignment has happened along the way. However, final user acceptance test (UAT) and stakeholder approval must still be attained prior to the release. Note however the focus of the Final UAT should be to validate the highest-level business requirements (as defined in the Business Use Case), and obtain approval to release; this is NOT meant to be a substitute for lower-level functional or user scenario testing (which happens during the Prototype-to-MVP stage).
Following is a sample UAT Framework for a no-code project:
1. Define the UAT strategy and scope
2. Prepare the UAT environment
3. Define test cases and scenarios
4. Execute test cases and scenarios
5. Verify defect fixes
6. Obtain sign-off from stakeholders
7. Perform post-release validation
Some additional tips and considerations:
The first time you conduct a no-code UAT, defining this framework may take a moderate amount of planning and effort. However, this process will be streamlined as you build additional apps; and once you have started building a CoE, the UAT framework is a very good opportunity for standardization within the CoE and across the broader organization.
This UAT framework should be tailored to meet the specific needs of a no-code project, based on the size and complexity of the solution being developed, the number of stakeholders involved, and the testing resources available. The framework should be documented and communicated to all relevant stakeholders and updated as needed throughout the testing phase of the project. The CoE can also play a key role in determining how to scale or tailor the UAT effort.
There are typically key roles that are involved in the UAT testing process. These roles include:
Stakeholders: These are the end-users, subject matter experts, or process owners who are responsible for defining the business requirements and objectives of the system. They are also the primary stakeholders who will be using the system, and they play a critical role in the UAT process by reviewing and validating the system's functionality and usability.
Test Coordinator/Manager: This is the person who is responsible for overseeing the UAT process and ensuring it is conducted efficiently and effectively. The test coordinator is responsible for defining the UAT plan, scheduling and coordinating testing activities, and ensuring all necessary resources are in place. (This role may sometimes be located in the CoE, if one exists, to promote consistency and repeatability of practices across the organization.)
Testers: These are the individuals who perform the actual testing activities. Testers are typically end-users or subject matter experts (borrowed from their business unit), who have a good understanding of the business processes and workflows that the system supports. It is important to have scheduled and secured their time early in the project (during the Project Assignment stage), as their time may be limited, since pulling front-line personnel away from the business can have significant business impact.
No-code Creators: No-code Creators will be involved in UAT to address any defects or issues that are identified during testing. They may work closely with the testers and business stakeholders to understand the root cause of defects and to develop appropriate solutions.
Final User Acceptance testing should be done in a PRE-PROD or similar environment that closely mirrors PROD. This will help eliminate issues due to data or environmental inconsistencies. It will also exhibit performance that should be indicative of the final PROD application.
User Support & Enablement
User Support & Enablement is a critical step in the process – and sometimes easily overlooked or rushed in the haste to get the no-code app released to end users. While this topic is not necessarily unique to no-code, and there are many resources out there on preparing user support and training, there are some tips you should consider that are especially important with no-code apps.
How
Who
When
One final reminder – the training and enablement effort does not stop with the Go-Live release of the MVP app! Your no-code app will continue to evolve and mature, and you need to support it throughout its lifecycle. A few suggestions:
For a period after the initial release, consider the use of your trainers (or power users), to provide ongoing support to users, addressing their questions, concerns, and troubleshooting issues. They may offer individual or group support sessions, respond to user inquiries through email or chat, or maintain a knowledge base with frequently asked questions.
Adopt a model of continuously re-training the users based on the ‘Everyday Delivery’ approach (which will be discussed further in the next set of Stages). Providing more frequent - but smaller - chunks of enablement content is often more effective with an app that is frequently being updated. You won’t need to set up training every time new functionality is deployed; it will depend on the size and the impact of the change.
For frequently changing apps we would suggest setting up regular monthly or bi-weekly ‘what’s new’ updates to the user population (this may highlight short videos, walk-throughs, or even new training classes for more significant updates).
Finally, for complex enterprise-grade apps, consider using certifications and testing to confirm users’ ability to fully utilize the system. Having the ability to measure the completeness and effectiveness of the training is critical for applications that may impact key mission-critical processes.
This is a lot to take on – however, for organizations who have adopted a CoE, the CoE can be of tremendous assistance in coordinating and supporting the above activities. It’s too easy for the DIY project teams to get pulled back into working on new apps or different responsibilities, so having the CoE ensure some level of consistency is highly helpful.