Last Friday afternoon, after spending a week studying the required topics in detail, I went for the Integration Architecture Designer certification exam. It was quite an enjoyable exam to study for, even though the subject matter was tricky and required a more focused revision plan, particularly for someone who designs integrations rather than builds them herself.
Studying for the Integration Architecture Designer exam
I’ve got myself into a bit of a study routine now, since I’ve been ploughing my way up the pyramid. Here’s what I do:
- Reading through Salesforce’s Trailmix
- Taking each topic one at a time, making notes, drawing diagram versions of the notes
- Reading and re-reading my notes
- I didn’t do the superbadge, but I should have and I will be tackling it this week (I got scared of the Apex pre-requisite and fled!!)
- Finally, testing myself with practice questions on Quizlet (there are a few good mock tests on there)
- I also went back to the wonderful Maciej, for his testimony on this exam. Maciej, thank you again for your blog; it was such a help.
Did you want to know what you needed to study then? OK.
PS: If you’re a videos person, take a look at the Ladies Be Architects integration videos too.
The main topics of study (as per the Resource Guide) include:
Reasons to Back Up Salesforce Data
- Data corruption
- Human error / malice
- Preparation for a data migration exercise (including rollback)
- Archiving solution to reduce data volumes
- Replication of data to a data warehouse / Business Intelligence hive
- Taking snapshots of development versions (metadata)
Key Considerations for Backup:
- Types of backup (and use cases for each)
Backup Techniques (and considerations for each)
- Data Loader, Data Export, AppExchange or an API Solution?
- API Types: REST, SOAP, Metadata, Bulk
- The ability to retry any failures is really important
- Consider PK Chunking if you’re getting timeouts with the Bulk API
- REST and SOAP API call limits are counted separately from Bulk API limits
- You can run 10,000 batches per 24-hour period with the Bulk API – so use SOAP / REST if you need to preserve those batches for something else
- REST and SOAP are best used for backing up attachments
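PK Chunking is easier to reason about once you see what it actually does: rather than one enormous query that times out, the primary-key range is carved into fixed-size slices and each slice is queried on its own. Here's a minimal local sketch of that idea in Python – the record IDs and chunk size are invented for the example, and no real API call is made:

```python
# Local illustration of the PK Chunking idea: instead of one huge query,
# split the primary-key range into fixed-size chunks and query each slice.
# The record IDs and chunk size here are made up for the example.

def pk_chunks(ids, chunk_size):
    """Yield (first_id, last_id) boundaries for each chunk of sorted IDs."""
    ids = sorted(ids)
    for start in range(0, len(ids), chunk_size):
        slice_ = ids[start:start + chunk_size]
        yield slice_[0], slice_[-1]

record_ids = [f"001{n:04d}" for n in range(10)]  # ten fake IDs
boundaries = list(pk_chunks(record_ids, chunk_size=4))
for lo, hi in boundaries:
    # Each pair would become a filter like: WHERE Id >= lo AND Id <= hi
    print(lo, hi)
```

In the real Bulk API this chunking is requested with a header rather than done by hand, but the effect is the same: each slice scans a bounded range, so no single query runs long enough to time out.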
Backup Performance Optimisation
- Speed depends on
- Number of records
- Number of columns
- Field types (e.g. Rich Text, Long Text – these will slow your backup down)
- Network capacity
- API selected
- Query splitting: horizontally and vertically (I’d recommend reading up about this).
- Monitoring access via logs
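Query splitting is worth a concrete picture. Here's a hedged local sketch (invented data, no API involved): a "horizontal" split pulls subsets of rows, like adding a WHERE clause on a date range, while a "vertical" split pulls subsets of columns, so wide objects with heavy fields back up faster:

```python
# Local sketch of query splitting for backups (invented data):
# "horizontal" splits pull subsets of rows (like a WHERE clause),
# "vertical" splits pull subsets of columns (like trimming the SELECT list).

records = [
    {"Id": "001A", "Name": "Acme", "Notes": "long text...", "Year": 2019},
    {"Id": "001B", "Name": "Globex", "Notes": "more text...", "Year": 2020},
]

def horizontal_split(rows, predicate):
    """Rows matching the predicate - like adding a WHERE clause."""
    return [r for r in rows if predicate(r)]

def vertical_split(rows, fields):
    """Only the requested columns - like trimming the SELECT list."""
    return [{f: r[f] for f in fields} for r in rows]

recent = horizontal_split(records, lambda r: r["Year"] >= 2020)
slim = vertical_split(records, ["Id", "Name"])
```

Splitting vertically is how you keep slow field types (Rich Text, Long Text) out of the hot path: back the slim columns up often, the heavy ones less so.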
Key Considerations for Recovery
- Key elements of a recovery plan:
- Scope (must align with the customer’s Disaster Recovery Plan)
- The version to be recovered
- Ease of restoration
- Fault tolerance
- Plan for minimal impact and disruption to the business
- Minimise data transformation
- Consider the time taken to recover and plan around it
- Automate as much of the restore as you can
- Minimise impact on current and future application design
- Automatically solve any common / expected roadblocks
- Provide error logs to enable manual intervention
- Types of restore
- Single record
- Logical Partial Restore
- Org Copy
- Restore processes – these are provided in the materials referenced in the resource guide
Use of Sandboxes in Backup and Restore Processes
- Full copy sandboxes should NEVER be used as a backup store because
- There is no guarantee of data integrity
- The copy isn’t point-in-time, as a backup is meant to be. So don’t let a CIO convince you that it’s a good idea.
Security of Data in Transit
- The usual Transport Layer Security provided by Salesforce
- I remember having a question that asked me to describe what the protocol is, and I selected TLS (the successor to SSL)
- What to use if you are transporting data over an unsecured network – Base64 comes up here, so read about Base64 encoding (bear in mind it encodes data as text; it doesn’t encrypt it)
- Named Credentials are a key area of study – that one gave me a surprise
- Look into how you can securely store credentials without exposing them to naughty people
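To see what Base64 actually is (and isn't), here's a tiny Python example using the standard library: it turns arbitrary bytes, such as an attachment body, into a safe ASCII string that survives transport as text, and it round-trips losslessly. It is an encoding, not encryption – anyone can decode it:

```python
import base64

# Base64 turns arbitrary bytes into a safe ASCII alphabet so binary data
# (e.g. an attachment body) survives transport as text. Note: it is an
# encoding, not encryption - anyone can decode it.

payload = b"attachment bytes \x00\xff"
encoded = base64.b64encode(payload).decode("ascii")
decoded = base64.b64decode(encoded)
assert decoded == payload  # round-trips losslessly
```

That's why actual security on an unsecured network still comes from the transport layer (TLS); Base64 just makes the bytes safe to carry.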
I could have really gone to town on this; Salesforce gives you a link to a website that will teach you all you need to know about what makes up an enterprise integration pattern. I spent the best part of a day studying and making notes about what each message type is, what they mean and how they work. I summarised it into a diagram to help me understand it better.
There is also a question that came up about the benefits of using an Enterprise Service Bus and middleware. Here are my takeaways from the integration patterns site – it gave me a much better understanding for all the Technical Process Reviews I’ll be doing soon.
Request – Reply
Typically, this is when one system contacts another to send data and accompanying instructions, including a return address the other system can send its response to.
2 types: Remote Procedure Call and Messaging Query.
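The "return address" idea clicked for me once I sketched it. Here's a toy, single-process simulation of Request–Reply in Python: the request carries a reply channel (the return address) and a correlation ID so the caller can match the response to its request. All the names and values are invented for the illustration:

```python
import queue

# Toy simulation of the Request-Reply pattern: the request carries a
# "return address" (a reply channel) and a correlation ID so the caller
# can match the response to its request. All names here are invented.

request_channel = queue.Queue()

def service():
    """The receiving system: read one request, reply to the return address."""
    req = request_channel.get()
    req["reply_to"].put({"correlation_id": req["correlation_id"],
                         "body": req["body"].upper()})

reply_channel = queue.Queue()
request_channel.put({"correlation_id": 42,
                     "body": "hello",
                     "reply_to": reply_channel})
service()
response = reply_channel.get()
```

In a real integration the channels would be message queues or HTTP endpoints rather than in-memory queues, but the shape of the exchange is the same.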
I also learnt a great deal about the use of channels within these integrations, particularly in terms of the directions your data are travelling in:
- You’ll need to study Canvas; it’s a secure way of exposing another system’s UI within Salesforce
I regretted not spending more time studying the differences between the Enterprise and Partner WSDLs and their capabilities. The training courses the resource guide directs you to are very good at explaining why you need them. The tricky part is understanding what else they can do, beyond acting as a contract between Salesforce and other systems. Developers with API experience, however, should find this a breeze.
Swot up on your WSDLs.
- Some crossover with the Development Lifecycle and Deployment exam here – if you study the types of testing and sandboxes you’ll be ok.
Ways of Moving Data Around
- Knowing how to connect two orgs together – Salesforce-to-Salesforce – and its capabilities and limitations will help you greatly.
- Copying data down into a sandbox – Full Copy copies everything and Partial Copy means you can apply a template of data objects and records to copy down
- Data Loader
- Middleware applications that handle ETL (Extract, Transform, Load)
The Beautiful APIs
Yucky for a non-developer, but really you just need to understand what is available and the differences between the APIs.
- Bulk – used for large data volumes (more about this in the Data Architecture exam)
- SOAP – synchronous (i.e. real-time) – most systems accept SOAP, and it processes XML only. You need a WSDL to use SOAP to integrate your systems.
- REST – lighter weight than SOAP; it can process XML and JSON. It doesn’t need a WSDL.
- Chatter REST API – just another protocol, but it exposes Chatter functions, such as posts, follows and comments
- Metadata – all of your fields, objects etc. You are more thoroughly tested about the Metadata API by the Deployment exam.
- Streaming – Publish / Subscribe
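The "SOAP is XML-only, REST also speaks JSON" distinction is easier to remember with the same record serialised both ways. These are illustrative payloads only – made-up field names, not real Salesforce API shapes (a real SOAP call would also wrap the XML in an Envelope):

```python
import json
import xml.etree.ElementTree as ET

# The same (made-up) record serialised both ways, to make the
# "SOAP is XML-only, REST also speaks JSON" point concrete.
# Illustrative payloads, not real Salesforce API shapes.

record = {"Name": "Acme", "Industry": "Manufacturing"}

# SOAP-style: an XML body (a real SOAP call wraps this in an Envelope,
# whose shape is dictated by the WSDL contract).
account = ET.Element("Account")
for field, value in record.items():
    ET.SubElement(account, field).text = value
xml_body = ET.tostring(account, encoding="unicode")

# REST-style: the same record as JSON - lighter, and no WSDL needed.
json_body = json.dumps(record)
```

Side by side, you can also see why REST is called lighter weight: the JSON carries just the data, while the XML (and, in real SOAP, its envelope) carries structure the WSDL contract demands.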
The way my husband describes SOAP vs REST, for example, is that SOAP is like a nightclub, with a bouncer and you need a pass to get in. REST is like a pub, you can just walk in as long as you’re not an idiot about it.
It helps to have some basic knowledge of authentication techniques; I am sure this will be a BIG topic for the Identity and Access Management credential. In the exam, I saw this being asked within the context of a mobile application.
Just know what you can and can’t use it for. You generally use it to test your integrations.
- Outbound messages are asynchronous but declarative.
- They are only available as actions for workflow rules.
- Admins can choose fields to send (and if they want to send a Session ID too) and specify an endpoint URL to send the data to
- They only work on one object at a time, so read the questions carefully. It’s often offered as an option but they’re testing how much you really know about the limitations of outbound messaging.
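One behaviour worth internalising is that outbound messaging keeps retrying until the endpoint acknowledges receipt (in reality for up to 24 hours, at growing intervals). Here's a toy Python sketch of that delivery loop – the endpoint class, message shape, and retry count are all invented stand-ins, with an endpoint that fails twice before acknowledging:

```python
# Toy sketch of outbound messaging's delivery behaviour: the sender keeps
# retrying until the endpoint acknowledges receipt. The endpoint below is
# an invented stand-in that fails twice before acknowledging.

class FlakyEndpoint:
    def __init__(self, failures_before_ack):
        self.failures_left = failures_before_ack
        self.received = []

    def deliver(self, message):
        if self.failures_left > 0:
            self.failures_left -= 1
            return False          # no Ack - the sender will retry
        self.received.append(message)
        return True               # Ack - message leaves the queue

def send_with_retries(endpoint, message, max_attempts):
    for attempt in range(1, max_attempts + 1):
        if endpoint.deliver(message):
            return attempt        # number of attempts it took
    return None                   # gave up - message marked failed

endpoint = FlakyEndpoint(failures_before_ack=2)
attempts = send_with_retries(endpoint, {"Id": "001A", "Name": "Acme"}, 5)
```

The retry-until-ack behaviour is also why your endpoint must tolerate receiving the same message more than once: an ack that gets lost on the way back triggers a redelivery.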
And, After All That…
My head hurt, a lot. So a hot bath and a celebratory prosecco were all this mummy needed to celebrate her victory. Until next time… and I wish you all the very best of luck with this exam. As ever, please tweet me @gemziebeth if you have feedback or comments, or even if this just helped you, a little. I love to hear from readers.