Certified Development Lifecycle & Deployment Designer!


I’m a bit behind schedule posting this, but at the start of May, I tackled the Development Lifecycle and Deployment Designer certification. Preparing for this one aligned nicely with some of my recent work at my job, where we’re refining our approach to DevOps as we shift to a more iterative methodology for many of our projects.  As such, a lot of the content for this exam was already relevant and fresh in my mind.  I still reviewed the study guide and Trailmix, but I’ve found that any certification exam is much more straightforward when I’ve actually had reason to work hands-on with the subject matter, especially when that hands-on work isn’t just my own test cases in a developer org.

One thing I appreciate about the exams on the System Architect side of the ‘Journey to CTA’ pyramid is that much of the content applies to technology beyond the Salesforce platform.  There are nuances to the best practices for a development lifecycle on the Salesforce platform, but in general, the recommendations, methodologies, and governance strategies that apply to developing on Salesforce apply to other environments as well.

If you’re reading this blog, you may be wondering: “What should I study if I want to pass the Development Lifecycle and Deployment Designer certification exam?”  I found the exam guide that Salesforce provides to be a great place to start, and a fairly accurate breakdown of the content areas of the exam.  The Trailmix is a good place to start with documentation, but I definitely recommend taking some time to get hands-on with the Force.com Migration Tool (Ant) as you prepare.  Familiarize yourself with what you can (and CAN’T) do with scripting, and with the tools available for different types of testing (and the best ways to leverage them).  It’s also a good plan to dive into the different governance structures and roles that come into play, and which types of governance are recommended for which circumstances.  All of this material is introduced in the Trailmix, but it’s worth your time to dig into the related links and additional reference materials, not only to ensure that you pass the exam, but also because it’s useful knowledge for managing application lifecycles on the platform and beyond!

Why Environment Management Matters

As I continue along the journey to CTA, I’ve been preparing to take the Development Lifecycle and Deployment Designer certification exam.  This exam covers a range of methodologies, tools, and best practices for managing the cycle of building and deploying new features on the Salesforce platform.  It’s an important area of knowledge for any architect working with large teams and/or complex projects and implementations: you need to ensure that new development doesn’t break existing functionality, without sacrificing the ease of the declarative modifications that administrators sometimes make directly in production.  Like several of the other certifications that comprise the System Architect domain, this one centers on Salesforce tools but requires knowledge of best practices and methodologies that are broadly applicable to other systems and technology implementations as well.

In my role as a solutions architect up until this point, I hadn’t had to put much thought into DevOps.  Sure, I knew to build in a sandbox and then deploy to production using change sets, the Metadata API, or Salesforce DX, but having a deeper strategy around environment management hadn’t been necessary with the small teams I’d been working with.  Or so I thought.  Then, one day, someone mistakenly refreshed a sandbox that held months’ worth of work planned to deploy to production the following week, and all of a sudden, having an environment management strategy even for a seemingly small project became critical.

What is Environment Management?

Environment management is the plan and strategy for where your new application or features are built and tested before they are deployed to production.  An environment management strategy could be very simple, with changes only occurring in production and their release being managed with profiles and permission sets, or it can be complex, incorporating numerous sandboxes, version control, or continuous integration as changes move from initial development through to deployment.  Your environment management plan should cover what work happens in what environment, what the process is for promoting changes from one environment to the next, and what the schedule is for refreshing each environment.  Taking the time to think through your environment management approach will help ensure that changes don’t unintentionally overwrite someone else’s work or break production functionality.

Environment Management Approaches

Don’t be this guy. Just… don’t. 

Even on what appears to be a small or simple project, you need to have an environment management strategy.  Your strategy might just be “this project only requires simple changes that are safe to make in production (reports, dashboards, list views), so a sandbox is not necessary”— but the point is that you need to make that decision with each project, and communicate the plan to anyone else that is engaged in your development process.

If your project involves any amount of automation, be it workflow, Process Builder, Flow, or code, you need to plan on working in a sandbox.  Developing your changes in a sandbox and then deploying to production will allow you to test and ensure that your new automation behaves as intended, without risking the integrity of your production data.  Salesforce makes it dangerously easy to add automation in production via Process Builder or Flow, but resist the temptation: test it in a sandbox first to help ensure that your shiny new feature won’t do anything unexpected when you release it.

If you’re working on a large team, possibly with multiple projects simultaneously in flight, you may require an environment landscape with numerous sandboxes: supporting individual developers on each project, integrating project contributions into a single environment, and then performing QA and user acceptance testing before changes are ultimately deployed to production.  This sophisticated approach to environment and release management allows multiple branches and workstreams to be incorporated while maintaining the integrity of the production environment.

From https://developer.salesforce.com/blogs/developer-relations/2014/12/salesforce1-enterprise-environment-management.html

If you have multiple team members working on the same project, in the same sandbox, make sure that everyone is on the same page about the sandbox refresh schedule.  One approach is to limit the access most developers and builders on the team have to the production environment and have all sandbox refreshes handled by the environment manager.  Of course, this can become cumbersome if different team members are working in different environments and need to refresh at different intervals.  A more scalable approach is to use a version control system like Git, so that your source of truth lives in a repository that is safe from sandbox refreshes, instead of in the metadata of any individual environment.  That is now my preferred approach, especially as Salesforce DX makes it easier to pull and push metadata changes between production, sandboxes, and scratch orgs.
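As a rough sketch, a version-control-backed flow with Salesforce DX might look like the following.  The org alias, branch name, and scratch org definition file are all hypothetical, and the exact commands will depend on your CLI version, so treat this as an outline rather than a recipe:

```shell
# Authorize the Dev Hub and spin up a scratch org for the feature
# (aliases and file paths below are placeholders)
sfdx force:auth:web:login --setdefaultdevhubusername --setalias DevHub
sfdx force:org:create --definitionfile config/project-scratch-def.json --setalias feature-org

# Build declaratively or in code in the scratch org, then pull the changes
# down into your local project directory
sfdx force:source:pull --targetusername feature-org

# Commit to version control; the repository, not any one org, is the source of truth
git checkout -b feature/my-new-automation
git add force-app/
git commit -m "Add new automation for the feature"
```

The key design point is that a sandbox refresh can no longer destroy work in progress, because every change lives in the repository the moment it’s pulled and committed.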

Doesn’t environment management make things take twice as long?

Some folks are resistant to working within a robust environment management architecture because it can extend the amount of time it takes for changes to be available in production.  However, an ounce of prevention is worth a pound of cure, and developing changes in non-production environments is the best strategy to ensure that you don’t break production functionality when adding new features.  Building and testing in production might seem faster, until you see how long it takes to fix something when one of your production changes doesn’t go according to plan.  Some changes can safely happen directly in production (think reports, dashboards, or list views) without impacting your production data, but for changes that affect your business logic, integrations, or architecture, the safest approach is to start in a sandbox.  The extra time to manage that deployment process is worth the risk mitigation for your production environment.

TDX18, Here I come!

I’m currently 32,000 feet above Pennsylvania, en route to TrailheaDX, Salesforce’s developer conference.  I’ve reviewed the sessions and bookmarked my favorites (and backups, and back-backups) for each time slot to fill up my agenda for the next two days.  With so much great content to choose from, it helps to focus on a few specific areas and learning goals for the conference.

This year, many of the projects and clients I’m working with are bringing sensitive data and processes onto Salesforce– HIPAA, FERPA, GDPR, and more.  I’ve been doing my own work with Shield, but I want to hear more about it from the experts, and I want to know more about identity, security, and compliance on the platform overall.  A few sessions I’ve bookmarked:

Scale Security at Your Company

GDPR Fundamentals and Einstein

Centralized Identity Management Across Multiple Orgs

The Future of Salesforce Security and Authentication

Beyond that, I definitely plan to stake out the product booth for platform encryption to get some questions answered about deterministic encryption.

My other focus this year is all about Einstein.  AI and machine learning have so much to offer any organization trying to be smarter with its data and provide better experiences for its users, and I’m excited to dig deeper into these tools.  A few sessions I’ve bookmarked here include:

Build Einstein: Multi-tenant, Multi-app, Machine Learning at Salesforce Scale

Salesforce Einstein Keynote: AI for CRM

Augment Me: Add Intelligence to Any Salesforce App with Einstein Discovery

Reality 2.0: Augmented and Made Smarter with the Salesforce Platform

These are just a handful of the sessions I’m looking forward to, and I’m sure there will be others that I stumble upon just by walking through the dev zone and seeing an interesting demo up in one of the theaters.  That’s one thing I appreciate about TDX: you can just walk into most sessions without needing to pre-register!  This gives an extra bit of flexibility to enjoy the conference, connect with other community members, and let your learning guide you!

Will you be attending #TDX18?  What are your goals for the conference?


Shield Platform Encryption and Queries

If you’ve started working with Shield Platform Encryption, you’ve probably encountered some of the particular ways it limits how you can interact with data in encrypted fields.  A significant limitation is that you cannot use encrypted fields for filtering or sorting in queries, whether directly through SOQL or in reports and list views.  After you stop throwing your computer at the wall and think about it, of course, it makes sense: Shield Platform Encryption uses probabilistic encryption, which means the ciphertext for a field is different every time, even when the plaintext value is the same.  Queries and reports operate against the stored data, which is still encrypted, so there is no way to identify records based on their decrypted values.  (Note: there is a deterministic encryption beta currently available, which does support filtering and sorting on encrypted fields because the ciphertext is the same for the same plaintext value.  You can contact Salesforce support to be added to the beta.)
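To see why probabilistic ciphertext defeats matching, here’s a sketch in anonymous Apex.  This uses the standard Crypto class purely as an analogy for Shield’s behavior (my assumption; Shield’s internals aren’t exposed): a random IV makes two encryptions of the same plaintext differ, while a fixed IV makes them identical, which is exactly what makes deterministic encryption filterable.

```apex
// Analogy only: Apex Crypto standing in for Shield's probabilistic vs.
// deterministic modes. Run as anonymous Apex.
Blob key = Crypto.generateAesKey(256);
Blob plaintext = Blob.valueOf('2018-03-14');

// Probabilistic: a random IV is generated per call, so the same plaintext
// encrypts to different ciphertext every time; equality matching is impossible
Blob p1 = Crypto.encryptWithManagedIV('AES256', key, plaintext);
Blob p2 = Crypto.encryptWithManagedIV('AES256', key, plaintext);
System.assert(EncodingUtil.base64Encode(p1) != EncodingUtil.base64Encode(p2));

// Deterministic: a fixed IV yields the same ciphertext for the same plaintext,
// so equality comparisons (and therefore filtering) become possible
Blob iv = Blob.valueOf('0123456789abcdef'); // 16-byte IV, illustration only
Blob d1 = Crypto.encrypt('AES256', key, iv, plaintext);
Blob d2 = Crypto.encrypt('AES256', key, iv, plaintext);
System.assertEquals(EncodingUtil.base64Encode(d1), EncodingUtil.base64Encode(d2));
```

The trade-off is the classic one: deterministic ciphertext leaks equality (an attacker can see which records share a value), which is the price of making those fields queryable.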

Not being able to query on encrypted fields can be a problem for many organizations, especially if they need to look up records based on data that must be encrypted for compliance reasons.  For instance, an organization tracking visit data under HIPAA requirements may need to update existing admission records with a discharge date, or insert a new visit if no matching admission record exists.  HIPAA mandates that visit dates be encrypted at rest, which means developers and architects have to get creative to locate the right record.

As I was puzzling over the problem, I realized that once the data is initialized in an Apex String variable, I can work with the plaintext value, and beyond that, I can build a synthetic key from the plaintext values and use it in a map to locate the correct record.  While I wouldn’t want to do this in anything running asynchronously (which could result in plaintext values being persisted somewhere, unencrypted at rest), it should be a secure solution for logic executed immediately by a trigger.

Here’s the rough code I put together for the proof of concept.  It isn’t production ready (the logic lives directly in the trigger, and it still needs try/catch blocks, test coverage, etc.), but I hope it’s enough to give an idea of how to approach this:

trigger ImportStagingTrigger on Import_Staging_Object__c (before insert) {
    List<Clinical_Event__c> ceToUpdate = new List<Clinical_Event__c>();
    List<Clinical_Event__c> ceToInsert = new List<Clinical_Event__c>();
    Map<String, Clinical_Event__c> ceMap = new Map<String, Clinical_Event__c>();

    // Use the unencrypted checkbox to limit the query; encrypted fields can't appear in a WHERE clause
    List<Clinical_Event__c> ceList = [SELECT Admission_Date__c, Discharge_Date__c, Attending_Physician__c, Department_of_Service__c
                                      FROM Clinical_Event__c
                                      WHERE Discharge_Date_populated__c = FALSE];

    // Build a synthetic key from the decrypted, in-memory values of each open clinical event
    for (Clinical_Event__c ce : ceList) {
        if (ce.Admission_Date__c != null && ce.Attending_Physician__c != null && ce.Department_of_Service__c != null) {
            Integer dayOfYear = ce.Admission_Date__c.dayOfYear();
            String ceKey = ce.Attending_Physician__c + ce.Department_of_Service__c + dayOfYear;
            System.debug('the clinical event key is ' + ceKey);
            ceMap.put(ceKey, ce);
        }
    }

    // Build the matching key for each staging record, then update the match or stage a new insert
    for (Import_Staging_Object__c isoce : Trigger.new) {
        if (isoce.Admission_Date__c == null) {
            continue;
        }
        Integer isoDayOfYear = isoce.Admission_Date__c.dayOfYear();
        String newCeKey = isoce.Attending_Physician__c + isoce.Department_of_Service__c + isoDayOfYear;
        System.debug('new ce key is ' + newCeKey);
        if (ceMap.containsKey(newCeKey)) {
            System.debug('Clinical event found');
            Clinical_Event__c clinEvent = ceMap.get(newCeKey);
            clinEvent.Discharge_Date__c = isoce.Discharge_Date__c;
            clinEvent.Discharge_Date_populated__c = true;
            ceToUpdate.add(clinEvent);
        } else {
            System.debug('no clinical events found');
            Clinical_Event__c clinEvent = new Clinical_Event__c();
            clinEvent.Admission_Date__c = isoce.Admission_Date__c;
            clinEvent.Attending_Physician__c = isoce.Attending_Physician__c;
            clinEvent.Department_of_Service__c = isoce.Department_of_Service__c;
            if (isoce.Discharge_Date__c != null) {
                clinEvent.Discharge_Date__c = isoce.Discharge_Date__c;
                clinEvent.Discharge_Date_populated__c = true;
            }
            ceMap.put(newCeKey, clinEvent);
            ceToInsert.add(clinEvent);
        }
    }

    // DML once, outside the loops, to respect governor limits
    update ceToUpdate;
    insert ceToInsert;
}

Because we cannot query on encrypted fields at all (not even, for instance, Discharge_Date__c = NULL), I recommend using a field on the record that isn’t encrypted to limit your query, lest at some point a large data set results in heap size problems for your map.  In my case, I created a checkbox field, Discharge_Date_populated__c, which my trigger sets to TRUE whenever it populates Discharge_Date__c on a record.  I’m doing this in code because declarative automation cannot be triggered based on encrypted fields.  You might also be able to limit records based on created date or other values that do not require encryption.

Keep in mind that if you’re creating a synthetic key string for the data in your map, you will also need to build a synthetic key from the corresponding values of the data you are inserting, in order to match each new insert or update to the existing records in your map.
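As a small refinement (my own suggestion, not something the trigger above requires), pulling the key construction into a shared helper guarantees that both sides of the match build the key identically, and adding a delimiter avoids accidental collisions between concatenated values:

```apex
// Hypothetical helper: build the same synthetic key for existing records
// (the map side) and for incoming staging records (the match side).
// The '|' delimiter prevents collisions such as 'AB' + 'C' matching 'A' + 'BC'.
public class ClinicalEventKey {
    public static String build(String physician, String department, Date admissionDate) {
        return physician + '|' + department + '|' + admissionDate.dayOfYear();
    }
}
```

Calling one helper from both loops keeps the two sides in sync if the key format ever changes.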

What strategies and techniques are you using to work with probabilistically encrypted data?


Welcome, and thank you

Hello, and welcome to Accidental Techie to Technical Architect!

I wouldn’t be where I am today in the Salesforce ecosystem if it weren’t for the amazing community generating content and sharing their knowledge.  This site is my attempt to give back as I continue to learn and grow on my journey to Certified Technical Architect.  The thing that amazes me about the Salesforce ecosystem is the way that we all support each other’s development, and I’m excited to have a place to collect and share everything that I’m learning and exploring.

Come for the technical walkthroughs and strategic explorations, stay for the personal stories and reflections on building a technical career as a transgender person with a non-technical background!