Analyzing and defining business processes is covered in our design and process courses. Let's expand on a couple of these issues. In the change management outline item, there's a tip that says quality is a process, not a product. As a working cloud architect, you'll almost never have a job where you design and implement the technical solution and then you're done. Instead, you'll be required to stay on the project for a period after implementation and launch, to make sure the solution continues to run and stabilizes. Anticipating that, you'll want to develop process checks and operational knobs to ensure that the solution can be monitored and adjusted during the stabilization period. So the business processes might look like generating reports, and they might include weekly or monthly meetings to analyze the reports. They might include procedures that explain to administrators how to take action.

Another item is customer success management. Only in the past few years has customer success been broken out from support. The difference is that support is there to make sure a solution continues to operate as it was designed and built. But what if the business circumstances change, or the technology changes, and the current solution starts to drift away from the actual business needs? Customer success is about making sure that the solution continues to evolve, remains effective and efficient for the current requirements, and uses the latest and most efficient technologies and methods. For example, if you can exchange a server-oriented architecture for a serverless service, you no longer have to be concerned about instance overhead, just the SLAs of the service.

There are different hardware CPU architectures supported in different zones: for example, Sandy Bridge, Haswell, Broadwell, Ivy Bridge, Skylake, and so forth. There are a lot of benchmark comparisons online, and there are open source CPU measurement tools that you can use. It's suggested that you test your application and workload in different zones to see what differences the hardware in the zone might make; a sketch for checking which CPU platform a VM landed on appears at the end of this section.

There are new persistent disk features and options released periodically, so check the documentation online for the latest figures and details. People often assume that a persistent disk is just a hard disk, when in fact it has different features and capabilities. Consider potential I/O bursts: if you've planned on I/O based on an average, and the actual disk usage is bursty, the disk could be underprovisioned for dealing with the bursts. Persistent disk performance scales with the size of the disk. So if you trade up to a larger disk in your design, revisit the performance to avoid overcapacity; if you trade disk size down, check for undercapacity. Potential I/O may also be constrained by CPU: an n1-standard-4 can drive a PD-SSD at capacity, and an n1-standard-16 can drive a local SSD at capacity. There are open source disk measurement tools available, and it's recommended that you run tests on your application at various sizes of data and loads, to understand not just capacity, but how the solution handles stress and overload conditions.

In general, internal IPs are faster than external IPs. Even VM to VM in the same zone, external IPs can deliver about 1 gigabit per second versus 8.5 gigabits per second for internal IPs. Here's another tip. Does a default instance in Google Compute Engine know the difference between an internal and an external IP? No, it doesn't. Standard instances have one interface, and all traffic arrives on that interface. So that means the external IP isn't added to the hardware interface; instead, traffic sent to the external IP is translated to the internal IP.
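Since zone hardware varies, it helps to confirm which CPU platform a given VM actually landed on before comparing benchmark runs. Here's a minimal sketch that queries the Compute Engine metadata server from inside a VM; the paths shown are the standard v1 metadata paths, but verify them against the current documentation, and note this only works when run on a GCE instance.

```python
# Sketch: ask the GCE metadata server which CPU platform this VM is on,
# so zone-by-zone benchmark results can be correlated with hardware.
# Only works from inside a Compute Engine VM.
import urllib.request

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/"

def metadata(path: str) -> str:
    # The Metadata-Flavor header is required, or the server rejects the call.
    req = urllib.request.Request(METADATA_URL + path,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print("CPU platform:", metadata("cpu-platform"))  # e.g. "Intel Skylake"
    print("Machine type:", metadata("machine-type"))
    print("Zone:", metadata("zone"))
```

Logging this alongside each benchmark run makes it easy to see whether a performance difference between zones tracks the underlying CPU platform.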
For sizing network capacity, consider potential I/O bursts. If you've planned on I/O based on an average, and the actual traffic is bursty, the network could be underprovisioned. Network capacity scales with the number of cores. So if you change the number of cores in the VMs in your design, revisit the network capacity to make sure the design doesn't overprovision or underprovision.

What are some of the factors you should consider when estimating workload? The most common, of course, are the characteristics of communication and messaging: how often requests, transactions, or operations are performed, and how big the payload of each request is. Other factors include state changes, and methods that divide work into parts, such as sharding; distribute work to multiple workers, such as pipelines; or aggregate work for efficiency, such as batching. A back-of-envelope estimate of this kind is sketched below.
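To make that concrete, here's a minimal sizing sketch, assuming an illustrative 2 gigabit-per-second egress cap per vCPU (the real caps vary by machine type, so check the current Compute Engine documentation): it turns a request rate, payload size, and burst factor into an estimated peak demand and a rough core count.

```python
# A minimal sizing sketch, not a definitive tool: estimate peak network
# demand from request rate and payload size, then convert it to a rough
# vCPU count. The per-vCPU egress figure below is an assumption for
# illustration only; verify against current Compute Engine limits.
from math import ceil

ASSUMED_GBPS_PER_CORE = 2.0  # illustrative egress cap per vCPU

def peak_gbps(requests_per_sec: float, payload_bytes: float,
              burst_factor: float = 1.0) -> float:
    """Peak throughput in Gbit/s: average load scaled by a burst factor."""
    avg_bits_per_sec = requests_per_sec * payload_bytes * 8
    return avg_bits_per_sec * burst_factor / 1e9

def cores_for(gbps: float) -> int:
    """Smallest vCPU count whose assumed egress cap covers the demand."""
    return ceil(gbps / ASSUMED_GBPS_PER_CORE)

if __name__ == "__main__":
    # Example: 5,000 requests/s with 64 KB payloads, bursting to 3x average.
    demand = peak_gbps(5_000, 64 * 1024, burst_factor=3.0)
    print(f"peak demand ~{demand:.1f} Gbit/s -> plan for >= {cores_for(demand)} vCPUs")
```

The burst factor is the important knob here: sizing from the average alone is exactly the underprovisioning trap described above.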