
Neuron version selection

Our newest release gives users additional control over the version of Neuron used to compile a service. This is helpful for maintaining consistency across a suite of services, especially if you’d like to perform additional testing on the newest version of Neuron before upgrading. Currently, all new services use the newest version of Neuron by default. Now, tenant admins can choose a default behavior for both new services and new versions of existing services. For new services, they can pin a specific version of Neuron, or choose either the most recent stable version (RecentStable) or a more “bleeding-edge” ReleaseCandidate version, which includes any additional updates outside our standard release schedule. For updates to existing services, the new default behavior is to use the same Neuron version as the original service; however, admins may also choose RecentStable or ReleaseCandidate.

Along the same lines, a service compiled with an older version of Neuron could previously be recompiled only with the latest version. Now, individual users can recompile a service using any version of Neuron they wish. This is helpful if you’d like to upgrade an older service but aren’t familiar with (or your organization hasn’t officially approved) the latest version of Neuron.

 

Analytics

Spark Insights now provides more comprehensive statistics, especially for users of our Import/Export (ImpEx) tool.

 

Testing

 

Explore test cases in API Tester

Previously, users who wished to review a particular test case from their testbed results had only one option: copying and pasting the data into the API Tester. Spark’s Testing Center now provides a way to pre-load the API Tester with the data from any test case in your testbed. All you need is the UUID, which is included when you download the testbed results as an Excel file. You can find step-by-step directions in our User Guide.

 

Systematic test case generation

Users tasked with testing often prefer to examine test cases presented in a systematic (as opposed to randomized) way. Rather than reordering the testbed results in the downloaded Excel file, users will now see the “Systematic test case generation” option when creating a testbed. This option produces test cases in order, based on all possible permutations of the inputs specified. For complete information, please see our User Guide.
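To illustrate what “all possible permutations of the inputs” means, here is a minimal Python sketch. The input names and values are invented for illustration, and this is not Spark’s actual implementation; it simply shows how an ordered Cartesian product differs from random sampling.

```python
from itertools import product

# Hypothetical example inputs (not from a real service definition).
inputs = {
    "age": [25, 40, 65],
    "plan": ["Basic", "Premium"],
}

# "Systematic" generation: enumerate every combination of the input
# values in a fixed, predictable order, rather than sampling randomly.
test_cases = [dict(zip(inputs, combo)) for combo in product(*inputs.values())]

for case in test_cases:
    print(case)
# 3 ages x 2 plans -> 6 test cases, always produced in the same order
```

Because the order is deterministic, reviewers can walk through the cases one boundary at a time instead of hunting for them in a shuffled list.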

 

Xcall

To maintain consistent behavior between Excel and Spark, we have removed the ability to specify subservices in Xcall using dot (“.”) notation (for example: =C.XCALL("TestFolder/BasicService.Subservice1")). Users of Xcall should now specify subservices via the #service_category field of the request_meta metadata, as demonstrated in the examples below.

Please be aware: all existing Xcalls with subservices will need to be changed to the new format.

1. Pass a single subservice as an inline argument to the function call:

   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category","Subservice1")

2. Pass multiple subservices as an inline argument to the function call:

   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category","Subservice1,Subservice2,Subservice3")

3. Pass multiple subservices by cell reference as an inline argument to the function call:

   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category",C4:C6)

   In this example, cells C4 to C6 contain the names of the subservices.

If no subservice name is provided, the default subservice will be assumed. For complete details on using subservices with your Xcalls, see our documentation.
