I can no longer upload to my WordPress site without resolving an issue which I suspect is caused by a change my website host, Ionos, has made. When I went to contact them I found they now require me to ring them, and that was the final straw for me.
I've been posting the articles to both WordPress and Medium, which is extra work with little payback: the WordPress site only gets a few hits a month, while the articles on Medium get hundreds of reads each week. The downside is that Medium is subscription based, but I would recommend you at least try it.
Here is the link and good luck with your Flutter projects:
Here’s my way, my influences, but feel free to follow any highway…
Naming
Resources are objects, things, so use a noun to specify a resource, e.g. /clients, and then use HTTP methods to define actions like get, modify and delete.
If you use verbs like send or get, you are falling into the Remote Procedure Call trap. RPC-style endpoints are notorious for specifying the function in the URL, e.g. /getClientNameById.
Resource names should be plural as this is the most widely adopted approach:
/clients not /client, /getClients or /user/client.
Unless there can only ever be one (0 or 1: it exists or it doesn't), e.g. /users/1/avatar.
HTTP Methods
Often called HTTP verbs, the methods are POST, GET, PUT, PATCH, DELETE and OPTIONS.
Considering the resource (entity) life cycle can help you determine which HTTP methods to allow, in particular where it is stored (database, file) and whether it can be archived.
Patch & Put
Favor PATCH for updating resources; use PUT when resources are replaced rather than updated, e.g. binary documents.
Be aware that PUT does a complete overwrite of the data: it is a request to replace a resource. PATCH is a request to update part or all of it, i.e. to produce a new version.
The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI.
In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.
PATCH is less likely to have side effects when a resource is updated frequently, because you only send the fields that change. PUT will overwrite fields that have not changed with the values you originally retrieved, so if another request updates the resource after your retrieval, you will reset its changes. You can and should guard against this at the repository level by checking a resource version and invalidating the update.
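A toy sketch of the difference, with a plain Python dictionary standing in for the resource store (the helper names are illustrative, not part of any HTTP library):

```python
def put(store, key, representation):
    # PUT: the enclosed entity replaces the stored resource wholesale.
    store[key] = dict(representation)

def patch(store, key, changes):
    # PATCH: apply only the supplied changes; untouched fields survive.
    store[key].update(changes)

store = {"clients/1": {"name": "Acme", "tier": "gold"}}

patch(store, "clients/1", {"tier": "silver"})
print(store["clients/1"])  # {'name': 'Acme', 'tier': 'silver'} - name survives

put(store, "clients/1", {"tier": "bronze"})
print(store["clients/1"])  # {'tier': 'bronze'} - name is gone
```

The lost "name" field in the PUT case is exactly the reset-on-concurrent-update hazard described above.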
Do not use ‘Put’ to create new items, use ‘Post’.
If barn 11 does not exist then PUT /barns/11 should return a 404 and a message saying it couldn't be modified because it doesn't exist.
Options Verb
The OPTIONS method represents a request for information about the communication options available on the request/response chain identified by the Request-URI. This method allows the client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action or initiating a resource retrieval.
A pre-flight options request is triggered by some CORS requests, see the section on CORS for more details.
Status Codes
If I do a POST or a PUT and something is created, use 201. Tell me explicitly that the new object was created.
If I do PUT or a PATCH and nothing’s modified, return 304 Not Modified.
If I send the wrong data or use the wrong format request, return 400 Bad Request.
If I haven't logged in or I sent an invalid auth token, return 401 Unauthorized.
If I try to do something I’m not allowed to do, 403 Forbidden.
Return 404 Not Found if the object never existed or is not there. Technically, if it once existed you want the Gone status code, 410, but 404 is the traditional choice. Do NOT use either of these when a GET returns no rows/results; that is still a 200.
405 represents Method Not Allowed. This goes beyond 403 Forbidden and says you can't perform this method on the resource (for example, delete it), but a different method would work.
415 corresponds to Unsupported Media Type, for example if I request XML but you only support JSON.
Formatting Content
I follow the conventions in the { json;api } specification.
// Articles with fields title, body and author.
GET /articles?include=author&fields[articles]=title,body,author
HTTP/1.1 200 OK
Content-Type: application/vnd.api+json
{
  "data": [{
    "type": "articles",
    "id": "1",
    "attributes": {
      "title": "JSON:API paints my bikeshed!",
      "body": "The shortest article. Ever."
    },
    "relationships": {
      "author": {
        "data": {"id": "42", "type": "people"}
      }
    }
  }]
}
The spec has matured over time and captures some of our important practices, showing us how to format the negotiation between client and server:
Relationships
Hypermedia
Pagination
Filtering
Sparse fields
Errors
Relationships (Related Resources)
Multiple related resources can be requested in a comma-separated list:
GET /articles/1?include=author,comments.author HTTP/1.1
Accept: application/vnd.api+json
In order to request resources related to other resources, a dot-separated path for each relationship name can be specified:
GET /articles/1?include=comments.author HTTP/1.1
Accept: application/vnd.api+json
To update a related resource, include it as a relationship in the PATCH request, e.g. to update the author relationship of an article:
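The original snippet was not captured here; a sketch based on the JSON:API specification (the IDs are illustrative), replacing an article's author via its relationship URL:

```
PATCH /articles/1/relationships/author HTTP/1.1
Content-Type: application/vnd.api+json
Accept: application/vnd.api+json

{
  "data": { "type": "people", "id": "12" }
}
```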
The term “hypermedia” was coined back in 1965 by Ted Nelson, and over the years has dominated the technology industry. Hypermedia, in its most basic sense is an extension of hypertext – something you may recognise from HTML.
Hypertext is essentially text that is written in a structured format and contains relationships to other objects via links.
Hypermedia is just an extension of the term hypertext, hypermedia includes images, video, audio, text, and links.
In a REST API, this means that your API is able to function similarly to a web page, providing the user with guidance on what type of content they can retrieve, or what actions they can perform, as well as the appropriate links to do so.
This in-page guidance via links means that your clients do not need to remember much; they can just request a resource and check the response to see how to work with the information provided, take appropriate actions, or access related information.
A good example of this is a client reading your site news: it only needs a single endpoint, https://<site>.com/news.
The response would include all of the related articles and actions, which you can change daily without coupling the client to news articles in any way.
Hypermedia can be expressed as links in a JSON:API response.
The links describe the content that can be retrieved and the actions that can be performed by the user in response to their request.
This is powerful as it gives the server flexibility to change without breaking the interface with the client.
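A sketch of what this looks like (URLs and attributes are illustrative; the structure follows the JSON:API links convention):

```
{
  "links": {
    "self": "https://example.com/news",
    "next": "https://example.com/news?page[offset]=2"
  },
  "data": [
    { "type": "articles", "id": "1", "attributes": { "title": "Todays news" } }
  ]
}
```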
Use the Accept and Content-Type Headers
We tend to think, "I'm going to build a JSON REST API and it's going to be awesome." It works great, until you get that million-dollar customer who needs XML. Then you have to go back and refactor the entire API for this customer. That's why you should build your API from the start with the ability to add content types in the future.
Give yourself the ability to support multiple specifications without worrying about breaking backward compatibility.
An incoming request may have an entity (body) attached to it. To determine its type, the server uses the Content-Type request header. Common content types are:
application/json
application/xml
text/plain
text/html
image/gif
image/jpeg
Similarly, to determine what type of representation the client desires, the Accept header is used. It takes the same values as listed for Content-Type above.
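A sketch of the negotiation, assuming a server that supports both JSON and XML (paths and payload are illustrative):

```
GET /clients/42 HTTP/1.1
Accept: application/xml

HTTP/1.1 200 OK
Content-Type: application/xml

<client><id>42</id><name>Acme</name></client>
```

If the server cannot honour the Accept header it should return 406 Not Acceptable rather than a representation the client cannot use.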
Sparse Fields
Sparse fields are key to creating APIs that can be used by many clients.
If we do not support sparse fields we force all clients to fetch the full set of data, which will grow as we add more functionality, similar to the large object graphs created in our monolith applications.
Use a fields[TYPE] parameter to return only specific fields in the response on a per-type basis.
GET /articles?include=author&fields[articles]=title,body
Here we want article objects to contain only the title and body fields, plus the related author.
Timeouts
The client is in a better position to tell the server how long it wants to wait before timing out, so generally allow the client to override the default timeout period.
?Timeout=3000
Caching
The ETag (entity tag) response header provides a mechanism to cache unchanged resources.
Its value is an identifier which represents a specific version of the resource. Here's an example ETag header:
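A sketch of the resulting exchange (the tag value and payload are illustrative); note how the second response is the 304 Not Modified mentioned earlier:

```
GET /clients/42 HTTP/1.1

HTTP/1.1 200 OK
ETag: "33a64df5"
Content-Type: application/json

{"id": 42, "name": "Acme"}

GET /clients/42 HTTP/1.1
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified
```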
Designing an API – that’s the most difficult part. That’s why you need to spend your time there and say, “Let’s get the design right, and everything else will follow.”
It only takes one tiny little thing, just one mistake in your API that goes to production, to screw things up.
Just like Facebook: they have this issue in production, but it’s in production now and they can’t change it.
Back Burner
Filtering, Sorting & Grouping
Descriptive Error Messages
Automate end-to-end functional testing
Cross Origin Resource Sharing.
<?xml version="1.0" ?>
<?job error="true" debug="false" ?>
<!--
'============================================================================
' FUSION LOG VIEWER SETTINGS
' FusLogVwSet.wsf
' Travis Illig
' tillig@paraesthesia.com
' http://www.paraesthesia.com
'
' Overview: Enables/disables custom settings for the fuslogvw.exe tool.
'
' Command syntax: (Run "FusLogVwSet.wsf /?" for syntax and usage)
'
'============================================================================
-->
<package>
<job id="FusLogVwSet">
<runtime>
<description>
FusLogVwSet
----
This script "enables" and "disables" custom settings for the Fusion Log Viewer tool.
Enabling settings will:
* Create a log folder (default: D:\fusionlogs)
* Add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogPath and set it to the log folder
* Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogFailures to 1
* Optionally set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog to 1
* Optionally set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogResourceBinds to 1
Disabling settings will:
* Delete the log folder and its contents
* Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogPath
* Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogFailures to 0
* Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog to 0
* Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogResourceBinds to 0
</description>
<named
name="enable"
helpstring="Enable custom fuslogvw.exe settings."
type="simple"
required="false"
/>
<named
name="all"
helpstring="When used with /enable, logs both failures and successes. Only valid with /enable."
type="simple"
required="false"
/>
<named
name="disable"
helpstring="Disable custom fuslogvw.exe settings."
type="simple"
required="false"
/>
<named
name="logpath"
helpstring="Sets the log path (default is D:\fusionlogs). Only valid with /enable."
type="string"
required="false"
/>
</runtime>
<!-- Helper Objects -->
<object id="fso" progid="Scripting.FileSystemObject" />
<object id="shell" progid="WScript.Shell" />
<!-- Main Script -->
<script language="VBScript">
<![CDATA[
'============================================================================
' INITIALIZATION
Option Explicit
'Declare variables/constants
Const SCRIPTNAME = "Fusion Log Viewer Settings"
Const VERSION = "1.0"
Const DEFAULT_FUSIONLOGPATH = "D:\fusionlogs"
Const REG_LOGPATH = "HKLM\SOFTWARE\Microsoft\Fusion\LogPath"
Const REG_LOGFAILURES = "HKLM\SOFTWARE\Microsoft\Fusion\LogFailures"
Const REG_FORCELOG = "HKLM\SOFTWARE\Microsoft\Fusion\ForceLog"
Const REG_RESOURCEBINDS = "HKLM\SOFTWARE\Microsoft\Fusion\LogResourceBinds"
'============================================================================
'PRIMARY CODE
'============================================================================
On Error Resume Next
WScript.echo SCRIPTNAME & " v" & VERSION & vbCrLf
'Parse arguments
Dim argsSpecified
Dim argsEnable, argsDisable, argsAll, argsLogPath
argsEnable = WScript.Arguments.Named.Exists("enable")
argsDisable = WScript.Arguments.Named.Exists("disable")
argsAll = WScript.Arguments.Named.Exists("all")
If(WScript.Arguments.Named.Exists("logpath"))Then
argsLogPath = WScript.Arguments.Named.Item("logpath")
End If
'Validate arguments
If(not argsEnable and not argsDisable)Then
' Must specify either enable or disable
WScript.Echo "*** You must specify enable or disable."
WScript.Arguments.ShowUsage
WScript.Quit
End If
If(argsEnable and argsDisable)Then
' Can't enable and disable at the same time
WScript.Echo "*** You must specify EITHER enable OR disable; not both."
WScript.Arguments.ShowUsage
WScript.Quit
End If
If(argsDisable and argsAll)Then
'all is only valid with enable
WScript.Echo "*** Argument 'all' is only valid with 'enable'."
WScript.Arguments.ShowUsage
WScript.Quit
End If
If(argsDisable and WScript.Arguments.Named.Exists("logpath"))Then
'logpath is only valid with enable
WScript.Echo "*** Argument 'logpath' is only valid with 'enable'."
WScript.Arguments.ShowUsage
WScript.Quit
End If
If(argsLogPath = "" and WScript.Arguments.Named.Exists("logpath"))Then
'If logpath is specified, must put a value
WScript.Echo "*** Argument 'logpath' must have a value if specified."
WScript.Arguments.ShowUsage
WScript.Quit
End If
' Output settings
If(argsEnable)Then
If(argsAll)Then
LogMessage "Action: Enable Custom Logging - Failure and Success", 0
Else
LogMessage "Action: Enable Custom Logging - Failure Only", 0
End If
If(argsLogPath <> "")Then
LogMessage "LogPath: " & argsLogPath, 0
End If
Else
LogMessage "Action: Disable Custom Logging", 0
End If
' Update settings
Dim logFolder, logFolderObj, regVal
If(argsEnable)Then
' Enable settings
' Create a log folder (default: D:\fusionlogs)
If(argsLogPath = "")Then
logFolder = DEFAULT_FUSIONLOGPATH
Else
logFolder = argsLogPath
End If
If(FolderExists(logFolder))Then
' The folder already exists; since we're deleting it when we disable
' settings, we don't want to use a pre-existing folder.
LogMessage "Folder " & logFolder & " exists. Custom log folder must not already exist.", 1
WScript.Quit(0)
End If
Set logFolderObj = fso.CreateFolder(logFolder)
If Err.Number <> 0 Then
LogMessage "Unable to create log folder" & logFolder, 1
WScript.Quit(-1)
End If
Err.Clear
LogMessage "Created log folder " & logFolderObj.Path, 0
' Add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogPath and set it to the log folder path.
SetRegKey REG_LOGPATH, logFolderObj.Path, "REG_SZ"
regVal = GetRegKey(REG_LOGPATH)
If(regVal <> logFolderObj.Path)Then
LogMessage "Unable to write registry key " & REG_LOGPATH, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_LOGPATH, 0
' Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogFailures to 1
SetRegKey REG_LOGFAILURES, 1, "REG_DWORD"
regVal = GetRegKey(REG_LOGFAILURES)
If(regVal <> 1)Then
LogMessage "Unable to write registry key " & REG_LOGFAILURES, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_LOGFAILURES, 0
If(argsAll)Then
' Optionally set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog to 1
SetRegKey REG_FORCELOG, 1, "REG_DWORD"
regVal = GetRegKey(REG_FORCELOG)
If(regVal <> 1)Then
LogMessage "Unable to write registry key " & REG_FORCELOG, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_FORCELOG, 0
' Optionally set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogResourceBinds to 1
SetRegKey REG_RESOURCEBINDS, 1, "REG_DWORD"
regVal = GetRegKey(REG_RESOURCEBINDS)
If(regVal <> 1)Then
LogMessage "Unable to write registry key " & REG_RESOURCEBINDS, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_RESOURCEBINDS, 0
End If
Else
' Disable settings
logFolder = GetRegKey(REG_LOGPATH)
If(logFolder = "")Then
LogMessage "Unable to read registry key " & REG_LOGPATH, 1
WScript.Quit(-1)
End If
If(FolderExists(logFolder))Then
' The folder exists; delete it and its contents
fso.DeleteFolder logFolder, true
If Err.Number <> 0 Then
LogMessage "Unable to delete log folder" & logFolder, 1
WScript.Quit(-1)
End If
Err.Clear
LogMessage "Deleted log folder " & logFolder, 0
End If
' Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogPath
If(DeleteRegKey(REG_LOGPATH))Then
LogMessage "Deleted registry key " & REG_LOGPATH, 0
Else
LogMessage "Unable to delete registry key " & REG_LOGPATH, 1
WScript.Quit(-1)
End If
' Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogFailures to 0
SetRegKey REG_LOGFAILURES, 0, "REG_DWORD"
regVal = GetRegKey(REG_LOGFAILURES)
If(regVal <> 0)Then
LogMessage "Unable to write registry key " & REG_LOGFAILURES, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_LOGFAILURES, 0
' Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog to 0
SetRegKey REG_FORCELOG, 0, "REG_DWORD"
regVal = GetRegKey(REG_FORCELOG)
If(regVal <> 0)Then
LogMessage "Unable to write registry key " & REG_FORCELOG, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_FORCELOG, 0
' Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\LogResourceBinds to 0
SetRegKey REG_RESOURCEBINDS, 0, "REG_DWORD"
regVal = GetRegKey(REG_RESOURCEBINDS)
If(regVal <> 0)Then
LogMessage "Unable to write registry key " & REG_RESOURCEBINDS, 1
WScript.Quit(-1)
End If
LogMessage "Wrote to " & REG_RESOURCEBINDS, 0
End If
LogMessage "Log settings update COMPLETE. You must reset IIS for changes to take effect in ASP.NET apps.", 0
On Error Goto 0
Wscript.Quit(0)
'============================================================================
' CreateNewObject
'
' Creates a new object, given a type, and performs requisite error checking.
' Exits the program if the object can't be created.
'
Function CreateNewObject(objType)
On Error Resume Next
'Create a new object
Dim obj
Set obj = WScript.CreateObject(objType)
If Err.Number <> 0 Then
LogMessage "Unable to create " & objType, 1
WScript.Quit(-1)
End If
Err.Clear
Set CreateNewObject = obj
On Error Goto 0
End Function
'============================================================================
' FolderExists
'
' Returns a Boolean based on whether a folder exists or not
'
Function FolderExists(foldername)
On Error Resume Next
'Create a FileSystemObject object
Dim fso
Set fso = CreateNewObject("Scripting.FileSystemObject")
'Check for the folder
FolderExists = false
FolderExists = fso.FolderExists(foldername)
Set fso = Nothing
On Error Goto 0
End Function
'============================================================================
' DeleteRegKey
'
' Deletes a given registry key
' Returns true if the delete was successful, false otherwise
'
Function DeleteRegKey(regkey_name)
On Error Resume Next
'Create a shell object
Dim wshell
Set wshell = CreateNewObject("WScript.Shell")
'Write the regkey
wshell.RegDelete regkey_name
If Err.Number <> 0 Then
'Something else went wrong
LogMessage "Unable to delete key " & regkey_name, 1
DeleteRegKey = false
Else
DeleteRegKey = true
End If
Err.Clear
Set wshell = Nothing
On Error Goto 0
End Function
'============================================================================
' SetRegKey
'
' Sets the value for a given registry key
'
Sub SetRegKey(regkey_name, regkey_value, regkey_type)
On Error Resume Next
'Create a shell object
Dim wshell
Set wshell = CreateNewObject("WScript.Shell")
'Write the regkey
wshell.RegWrite regkey_name, regkey_value, regkey_type
If Err.Number <> 0 Then
'Something else went wrong
LogMessage "Unable to write key " & regkey_name, 1
End If
Err.Clear
Set wshell = Nothing
On Error Goto 0
End Sub
'============================================================================
' GetRegKey
'
' Retrieves the value for a given registry key
'
Function GetRegKey(regkey_name)
On Error Resume Next
'Create a shell object
Dim wshell
Set wshell = CreateNewObject("WScript.Shell")
'Read the regkey
Dim val
val = wshell.RegRead(regkey_name)
If Err.Number <> 0 Then
'Either we don't have permission to read the key or the key doesn't exist.
' If the key doesn't exist, it's error -2147024894
If Err.Number = -2147024894 Then
'The key doesn't exist
val=""
Else
'Something else went wrong
LogMessage "Unable to read key " & regkey_name, 1
val=""
End If
End If
Err.Clear
Set wshell = Nothing
GetRegKey = val
On Error Goto 0
End Function
'============================================================================
' LogMessage
'
' Writes a message to the event log
'
' msgType:
' 0 = Info
' 1 = Error
' 2 = Warning
Sub LogMessage(msgBody, msgType)
On Error Resume Next
'Create a shell object
Dim wshell
Set wshell = WScript.CreateObject("WScript.Shell")
If Err.Number <> 0 Then
WScript.Quit(-1)
End If
Err.Clear
'Figure out the error type
Dim msgTypeFull
If(msgType = 0) Then
msgTypeFull = "INFO"
ElseIf(msgType = 1) Then
msgTypeFull = "ERROR"
ElseIf(msgType = 2) Then
msgTypeFull = "WARNING"
End If
msgBody = WScript.ScriptName & " -- " & msgTypeFull & ": " & msgBody
wscript.echo msgbody
'Log the message
wshell.LogEvent msgType, msgBody
'Cleanup
Set wshell = Nothing
On Error Goto 0
End Sub
]]>
</script>
</job>
</package>
Codemagic can easily integrate with other cloud services. If the build agents are missing a service's CLI, you can use pip to add it to the agent as part of the build and then run the commands needed to deploy the build.
I didn’t expect that this would be the first blog post in the series!
In order to get Flutter accepted and approved at my company, I've been asked to demonstrate the end-to-end process of testing and deploying a Flutter application.
Feel free to come back to this blog post when you’re closer to shipping your shiny new Flutter application.
Focus and friction are two key things I look at when thinking about design, workflows and changes to them.
There is a tendency to only care about deployment and environments when they don’t work, and to make do for as long as possible.
This can be very costly, as deployment can easily become 80% of your development effort if you make it difficult, overly bespoke, or include numerous manual steps.
You won't regret the effort or cost of setting up a good CI/CD workflow using the best available tools; you can then get back to building great Flutter applications.
I chose GitLab, having had some experience of it before, for its powerful yaml configuration, large feature set and great integrations.
It's an all-in-one DevOps platform delivered as a single application; you can move really fast with GitLab.
Codemagic has great built-in support for Flutter, is highly configurable, and has a set of virtual Macs that allow you to build and sign applications for Apple devices and the App Store.
It makes the whole process much simpler and integrates well with GitLab.
Git is great, Git is powerful, but it will let you create an indecipherable mess when working with a team on many projects and features.
If it’s just you, check into one branch, usually called master or main and deploy from it. Super simple, on you go.
With one or more teams there are many features, projects, people and releases that need to be coordinated and controlled; you will require a branch structure and a process around it.
Lots of teams are successfully using Gitflow, but it takes a little time to get developers rowing in the same direction, requires lots of merges and makes the history tricky to understand.
I wanted something simple, like the single branch, that would work for teams, and I've chosen the Streamline git workflow.
If it causes friction we can adjust, and if it morphs back into Gitflow in the end, so be it; we will have made the leap in understanding.
So now we have the tool chain to build the workflows:
The continuous integration workflow will be triggered when developers check in to any feature branch with a tag beginning “CI”.
Future posts will cover the continuous deployment workflow, automated versioning and integration and deployment for a website version.
Toolset Cost Justification
GitLab & CodeMagic
GitLab is a single application that covers the total development and operations cycle and can run advanced security tests on the software you write.
It integrates easily with most other frameworks and services allowing you to move fast and create a fully automated system in no time.
CodeMagic targets mobile integration and deployment and brings some powerful tools and features to the table:
APIs to communicate with Apple and Google, so you can sign and build apps automatically.
A farm of Macs to allow you to build for iOS and macOS from any device.
Strong integration with Google Cloud to give you access to the Firebase services.
GitLab and CodeMagic integrate seamlessly together.
Whilst you could manually create servers and scripts to cover the tasks these services provide, and hook them up in other CI/CD services like TFS or Jenkins, you would have to maintain them, along with a number of virtual Mac agents, without support.
Even if you managed to find one person to cover this work at a modest salary of £20k per annum, that would be a much greater cost.
Setting up the CI workflow
Ok enough talk lets go!
I started out by writing the list of actions I wanted the Integration workflow to carry out:
Check code quality.
Run Unit and Widget tests.
Sign and build an iOS version of the app.
Sign and build an Android version of the app.
Run Integration tests using Google’s Firebase TestLab.
Send an install on device link to testers when a build succeeds.
All of these tasks are set up in the codemagic.yaml file that you add to the root of your Flutter project, configured with the integration and deployment tasks that you require.
The structure of the file is:
We will create the CI workflow under my-workflows.
Use the environment section to import the secret values needed to call the APIs of external services, including the App Store, Play Store and Firebase.
Add script tasks under scripts to build and sign the application for each platform required.
Configure artifacts to make the outputs available when the build completes.
Add recipients under email to send an email with app install links out to the testers.
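A minimal sketch of that structure (the workflow name, group name, email address and artifact paths are illustrative, not from the original post; check the Codemagic docs for the exact schema):

```yaml
workflows:
  my-ci-workflow:
    name: Flutter CI
    environment:
      groups:
        - app_store_credentials   # secret variable groups defined in the Codemagic UI
      flutter: stable
    scripts:
      - flutter analyze
      - flutter test
    artifacts:
      - build/**/outputs/**/*.apk
    publishing:
      email:
        recipients:
          - tester@example.com
```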
Setup – Service Access
In order to set up the workflow we will need access to the Codemagic, App Store, Play Store and Firebase services so we can use their APIs.
You will need to have or set up the service accounts before continuing: the Apple developer account costs £79 per year, the Google developer account has a one-off $25 registration fee, and Firebase and Codemagic have free tiers to get you started.
Follow the instructions in the section Service Access below, and gather the secrets:
I stored the secret info and the corresponding environment variable names in secure notes in my Apple account Keychain. It's up to you, but I would recommend you keep them safe and secure.
Setup – Adding secrets as Environment Variables
Use the Codemagic UI to easily create the group and secure environment variables:
Then include the groups in the environment section of the codemagic.yaml workflow:
Check out the documentation for more info on common environment variables:
This task will run code analysis on the project and highlight code that will be difficult to maintain or puts the codebase at risk.
The task is added to the scripts section in codemagic.yaml.
The rules are added to the project's pubspec.yaml file.
The output is saved as a build artifact:
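The snippet itself was not captured above; a sketch of what the task and artifact entries might look like (the log file name is illustrative):

```yaml
scripts:
  - name: Run static code analysis
    script: flutter analyze > flutter_analyze.log
artifacts:
  - flutter_analyze.log
```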
Task – Run local unit and widget tests
This task will pick up any unit or widget test files under the /test directory of the project that end in _test.dart.
Task script:
Artifact:
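The original snippets were not captured here; a sketch of what they might look like (the results file name is illustrative):

```yaml
scripts:
  - name: Run unit and widget tests
    script: flutter test --machine > test-results.json
artifacts:
  - test-results.json
```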
Task – Sign and build an iOS version of the application
Script Tasks:
Artifact:
Codemagic's API integration with Apple really helps you out with the signing process.
It will work out what the app needs and create any certificates to complete the signing, amazing…
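A sketch of the iOS script tasks (the bundle identifier is illustrative; app-store-connect, keychain and xcode-project are Codemagic's CLI tools, so check their docs for the current flags):

```yaml
scripts:
  - name: Fetch signing files
    script: app-store-connect fetch-signing-files "com.example.myapp" --type IOS_APP_STORE --create
  - name: Initialize keychain and use profiles
    script: |
      keychain initialize
      xcode-project use-profiles
  - name: Build the ipa
    script: flutter build ipa --release
artifacts:
  - build/ios/ipa/*.ipa
```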
Task – Sign and build an Android version of the application
Script Tasks:
Artifact:
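A sketch of one possible shape for the Android tasks, producing the debug and instrumentation APKs that TestLab consumes later (the gradle task names may differ in your project):

```yaml
scripts:
  - name: Build debug apk
    script: flutter build apk --debug
  - name: Build instrumentation test apk
    script: |
      cd android
      ./gradlew app:assembleAndroidTest
artifacts:
  - build/**/outputs/**/*.apk
  - android/app/build/outputs/**/*.apk
```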
Task – TestLab
With Google's Firebase TestLab you can run your coded integration tests on multiple real devices, and run a robo test that will discover and run through all the screens in your app, again on multiple devices.
Powerful stuff that will give you confidence that your application can run on the devices you wish to support.
The flip side is that these valuable tests take time to run, and that comes with a monetary cost for using the service.
The issuer id and key identifier are values you saved during the creation of the API key.
The 'APP_STORE_CONNECT_PRIVATE_KEY' is the key you downloaded from App Store Connect after creating the API key, the <hash>.p8 file. Just copy the contents directly into the environment variable value.
The 'CERTIFICATE_PRIVATE_KEY' is an RSA 2048-bit private key to be included in the signing certificate that Codemagic creates. You can use an existing key or create a new 2048-bit RSA key:
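One way to generate such a key is with openssl, which is preinstalled on most Macs (the file name is illustrative):

```shell
# Generate a 2048-bit RSA private key for the Codemagic signing certificate
openssl genrsa -out codemagic_private_key.pem 2048
# Inspect the result; it should begin with a BEGIN ... PRIVATE KEY header
head -1 codemagic_private_key.pem
```

Copy the full contents of the .pem file into the environment variable value, as with the .p8 key above.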
GCLOUD_KEY_FILE is the service account JSON key file. FIREBASE_PROJECT is your Firebase project ID; you can find it under Project Settings > General in the Firebase console.
For this workflow we will just create a default debug keystore directly in the codemagic.yaml file, which will be fine for integration and ad-hoc testing.
When we need to deploy to the Play Store we will set up the keystore in Google Cloud; this will be covered in the continuous deployment post.
XP
Codemagic
I found the most difficult thing at first was understanding the sections of the codemagic.yaml file.
In addition to the official documentation and Google, a few things really helped with this.
Firstly, you can set up a workflow using the UI, then switch to YAML configuration and export the values from the workflow you have set up.
You can use builder mode to get some contextual help.
When things got a little tougher, like integrating with TestLab, I eventually had to download the Firebase CLI and get the script running locally before going back to the workflow and plugging it in.
Firebase
I needed to install the CLI to list device models for TestLab and to run tests without going through Codemagic, which was costing a lot of build minutes.
See the Firebase CLI reference for more details. On the Mac you can install it with this command.
curl -sL https://firebase.tools | bash
Then login with this command.
firebase login
Install gcloud
curl https://sdk.cloud.google.com | bash
Login
gcloud auth login
And list device models with this.
gcloud firebase test android models list
Note you can change the gcloud login account (email) with
gcloud config set account ACCOUNT
Firebase TestLab
On the free Spark plan you have a limited daily allowance of test runs.
Once you have the firebase and gcloud CLIs installed you can run TestLab tests directly and plug the commands back into the codemagic.yaml file.
To run the tests you will need to set the project id first:
gcloud config set project PROJECT_ID
You can find the project ID by listing all projects in your Firebase console.
Do this by downloading the Android build artifacts and then modifying the codemagic.yaml command to:
gcloud firebase test android run \
--type instrumentation \
--app app-debug.apk \
--test app-debug-androidTest.apk \
--timeout 3m
so that it points at the downloaded build artifacts.
You can specify a device using --device, which can be added multiple times for multiple devices.
--device model=redfin,orientation=portrait \
Be careful to choose a device that supports your SDK; when a device supports multiple SDK versions you need to specify one:
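For example (the model and version values are illustrative; pick them from the models list above):

```
gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --device model=redfin,version=30,orientation=portrait
```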
Once you have the tests setup and passing make the changes back to codemagic.yaml and you are good to go.
Back Burner
Local Integration Tests
If you run up an emulator or attach a device then you can run all the integrations tests locally.
I would have liked to run local integration tests before the TestLab tests to catch breaking tests earlier and avoid running them on multiple devices, which costs both time and money.
You can run the tests locally using this command, but you need a device or emulator running.
Generally, the best way to learn git is probably to first only do very basic things and not even look at some of the things you can do until you are familiar and confident about the basics.
Carrot and stick is not a motivator for cognitive work
Money is only a motivator if it is an issue.
The best use of money as a motivator is to pay people enough to take the issue of money off the table.
Motivators
We care about mastery deeply
Challenge and mastery along with making a contribution.
We want to be self-directed – give people what they need and get out of their way.
We need purpose
Purpose
We are purpose maximisers not just profit maximisers
Companies are moving from a focus on profit to purpose.
It makes coming to work better.
It is a way to get better talent.
When the profit motive becomes unmoored from the purpose, bad things happen:
Ethical issues
Crappy products
Lame services
People need a reason to get up in the morning.
Keep purpose in your back pocket
Purpose gives you reason and reason is enough to make it work in this world.
So it is fitting to come back to it and blogging at the same time.
I’m currently (2020) at the beginning of Q4 in my career, looking for a strong finish, something interesting, this purpose can then be translated into some goals to give it direction.
To have fun along the way
Dev to Tech Lead, get out of my comfort zone, the rabbit hole.
Tech Coach, so I can travel again and surf once the kids take off.
Start or be involved in a significant project that changes people's lives for the better.
Possibly a book or training.
With your help I would like to use my blog to discuss my current hot spots, interesting IT topics which will help me further my goals to share, to lead.
Hopefully we can help each other join a few more dots, drop off the edge of wiki and have some non-time-pressured fun in the process.
If we get to what matters, I will consolidate the articles and discussions into some online training notes/book.
We're here to put a dent in the universe. Otherwise, why else even be here?
Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it.
I have looked in the mirror every morning and asked myself: “If today were the last day of my life, would I want to do what I am about to do today?” And whenever the answer has been “No” for too many days in a row, I know I need to change something.
I always advise people – Don't wait! Do something when you are young, when you have nothing to lose, and keep that in mind.
Be a yardstick of quality. Some people aren't used to an environment where excellence is expected.
It's very interesting, I was worth about over a million dollars when I was 23 and over 10 million when I was 24 and over a hundred million when I was 25 and it wasn't that important because I never did it for the money.
These are my research notes on coping with large entity framework models.
The current community consensus appears to be that the designer will slow down and become all but unusable around the 200+ entity mark.
Additionally a single model will not allow you to work across databases and is complicated to work with.
So given your company/customer prefers a model (over code-first), what options do you have?
Btw, these are my preferred ones, they are not exclusive, and when I talk about models I'm referring to EDMX files.
Separate models sharing a single context.
Hendry Ten shows you how you might automate the process of merging the MSL/CSDL/SSDL XML across models; you can only download the code on the last page of his blog. Note that you can turn off the automated code gen and run a custom T4 template to give you more control over the boundaries, e.g. exclude generating class code for common classes included in multiple models.
Working across database boundaries
Rachel Lim points out that you can do this by tricking EF if your database supports synonyms.
Multiple models each with their own context
Rune Gulbrandsen has a handy technique if you just want to read some data from another context. Cheats aside, you can use the unit of work pattern in a service above the data access/repository layer to orchestrate transactions between contexts; Slauma gives an example of how this might be done here. Having the same 'common' entities like reference data in multiple models should only be a problem if you are using a custom code generation script, and then you can add some rules to stop the duplicate class outputs.
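As a rough sketch of that unit of work idea, the service below spans two contexts with a single TransactionScope so both SaveChanges calls commit or roll back together. The context and entity names (SalesContext, BillingContext, Order, Invoice) are hypothetical, and note that spanning two databases this way may escalate to a distributed transaction.

```csharp
using System.Transactions;

public class OrderService
{
    public void PlaceOrder(Order order, Invoice invoice)
    {
        // One ambient transaction covers both contexts.
        using (var scope = new TransactionScope())
        using (var sales = new SalesContext())
        using (var billing = new BillingContext())
        {
            sales.Orders.Add(order);
            billing.Invoices.Add(invoice);

            sales.SaveChanges();
            billing.SaveChanges();

            // Commit both; disposing the scope without
            // calling Complete rolls everything back.
            scope.Complete();
        }
    }
}
```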
Separate schema files
If you just want to separate out the schema in your .edmx files take a look at the MultiFileEDMX project.
The issues may well be fixed in later releases of Entity Framework, but nothing will fix large model complexity.
I prefer working with small models say less than 20 entities.
When working with smaller models it is easier to see the domain themes and the model names are descriptive in the solution like ‘Sales’.
It is what Eric Evans describes when he explains aggregate roots; Ward Bell leads nicely into this in his blog on coping with large models.
That’s it so far…
Extras
Perry Marchant wrote a good article on performance and entity framework.
CSDL – Conceptual schema definition language is an XML-based language that describes the entities, relationships, and functions that make up a conceptual model of a data-driven application.
SSDL – Store schema definition language is an XML-based language that describes the storage model of an Entity Framework application.
MSL – Mapping specification language is an XML-based language that describes the mapping between the conceptual model and storage model of an Entity Framework application.
I recently found myself in the position where I needed to reassemble an ASP.Net site from its deployed files.
The first part was easy, create a website project and pull in the front-end files, then it was on to the harder parts, recreating the backing code and making sense of the minified files.
To recreate the code I used Resharper 6’s new code disassemble feature (F12 when in the object browser) and being a small project it was feasible to do this manually rather than via scripts or automation.
I wanted to write this post to highlight a couple of issues and gotchas that cost me some time.
I recently needed to cast an object from the Enterprise Library Caching block back to a type, but started getting the popular System.NullReferenceException error in my tests if the value returned from the caching block was null.
The problem is that you cannot cast null back to a non-nullable type like an int.
The answer was to use default(T), which returns null for reference types or the default value (e.g. 0 for int) for value types.
public T Get(string cacheKey)
{
    // GetData returns null on a cache miss; unboxing null to a value type
    // throws, so fall back to default(T) (null for reference types).
    return (T)(_manager.GetData(cacheKey) ?? default(T));
}