Category: .NET Core

C# integration testing in Azure DevOps with Docker containers – SQL Server and Cachet example

Every piece of software we make depends on others. For Stankins, as a general ETL tool, it is even more important to be tested with real data providers. For example, we may want to take data from SQL Server and send it to Cachet. How can we get a SQL Server and a Cachet instance up and running easily? The obvious answer these days is Docker.

Let’s see how a test for SQL Server looks:

using FluentAssertions;
using Stankins.Alive;
using Stankins.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Xbehave;
using Xunit;

namespace StankinsTestXUnit
{
    [Trait("ReceiverSqlServer", "")]
    [Trait("ExternalDependency","SqlServer")]
    public class TestReceiverSqlServer
    {
        [Scenario]
        [Example("Server=(local);Database=master;User Id=SA;Password = <YourStrong!Passw0rd>;")]
        public void TestReceiverDBServer(string connectionString)
        {
            IReceive status = null;
            IDataToSent data = null;
            $"Assume Sql Server instance {connectionString} exists , if not see docker folder".w(() => {

            });
            $"When I create the ReceiverDBServer ".w(() => status = new ReceiverDBSqlServer(connectionString));
            $"and receive data".w(async () =>
            {
                data = await status.TransformData(null);
            });
            $"the data should have a table".w(() =>
            {
                data.DataToBeSentFurther.Count.Should().Be(1);
            });
            $"and the result should be true".w(() =>
            {
                data.DataToBeSentFurther[0].Rows[0]["IsSuccess"].Should().Be(true);
            });


        }
    }
}

and for Cachet:



using FluentAssertions;
using Stankins.FileOps;
using Stankins.Interfaces;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Stankins.Rest;
using Xbehave;
using Xunit;
using static System.Environment;
using Stankins.Trello;
using Stankins.Cachet;

namespace StankinsTestXUnit
{
    [Trait("Cachet", "")]
    [Trait("ExternalDependency", "Cachet")]
    public class TestSenderCachet
    {
        [Scenario]
        [Example("Assets/JSON/CachetV1Simple.txt", 3)]
        public void TestSimpleJSON(string fileName,int NumberRows)
        {
            IReceive receiver = null;
            IDataToSent data = null;
            var nl = Environment.NewLine;
            $"Given the file {fileName}".w(() =>
            {
                File.Exists(fileName).Should().BeTrue();
            });
            $"When I create the {nameof(ReceiveRest)} for the {fileName}".w(() => receiver = new ReceiveRestFromFile(fileName));
            $"And I read the data".w(async () =>data= await receiver.TransformData(null));
            $"Then should be a data".w(() => data.Should().NotBeNull());
            $"With a table".w(() =>
            {
                data.DataToBeSentFurther.Should().NotBeNull();
                data.DataToBeSentFurther.Count.Should().Be(1);
            });
            $"The number of rows should be {NumberRows}".w(() => data.DataToBeSentFurther[0].Rows.Count.Should().Be(NumberRows));
            $"and now I transform with {nameof(SenderCachet)}".w(async ()=>
                data=await new SenderCachet("http://localhost:8000","5DiHQgKbsJqck4TWhMVO").TransformData(data)
            );

        } 

    }
}
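
The Trait attributes make it possible to run only the tests that need a given external dependency. A minimal sketch, assuming the standard xUnit trait filter syntax of dotnet test:

dotnet test stankinsv2/solution/StankinsV2/StankinsTestXUnit/StankinsTestXUnit.csproj --filter "ExternalDependency=SqlServer"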

(I have used XBehave for the extension methods.)

Nice and easy, right? Not so!

To get SQL Server up and running I have used a Docker Compose file:

version: '3'
services:
   db:
     image: mcr.microsoft.com/mssql/server
     ports:
       - "1433:1433"
     environment:
       SA_PASSWORD: "<YourStrong!Passw0rd>"
       ACCEPT_EULA: "Y"
     healthcheck:
       test: /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'SELECT 1'
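
Note that the healthcheck only reports the container state – nothing waits for it automatically. A possible sketch (my addition, not part of the original pipeline) that polls the health status before running the tests:

# poll until the db container reports healthy; the container name
# (here docker_db_1) depends on your compose project name - adjust it
until [ "$(docker inspect --format '{{.State.Health.Status}}' docker_db_1)" = "healthy" ]; do
  sleep 2
done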

and in the Azure DevOps YAML pipeline I start the containers, run the tests, collect the code coverage, and stop the containers:

docker-compose -f stankinsv2/solution/StankinsV2/StankinsTestXUnit/Docker/docker-sqlserver-instance-linux.yaml up -d

echo 'start regular test'

dotnet build -c $(buildConfiguration) stankinsv2/solution/StankinsV2/StankinsV2.sln

dotnet test stankinsv2/solution/StankinsV2/StankinsTestXUnit/StankinsTestXUnit.csproj --logger trx --logger "console;verbosity=normal" --collect "Code coverage"

echo 'coverlet'
coverlet stankinsv2/solution/StankinsV2/StankinsTestXUnit/bin/$(buildConfiguration)/netcoreapp2.2/StankinsTestXUnit.dll --target "dotnet" --targetargs "test stankinsv2/solution/StankinsV2/StankinsTestXUnit/StankinsTestXUnit.csproj --configuration $(buildConfiguration) --no-build" --format opencover --exclude "[xunit*]*"

echo 'compose down'
docker-compose -f stankinsv2/solution/StankinsV2/StankinsTestXUnit/Docker/docker-sqlserver-instance-linux.yaml down
        

Easy, right? That’s because SQL Server is well behaved and has a fully functional image on Docker Hub.

That is not so easy with Cachet. Cachet requires configuration – and more, after configuration it generates a random token for writing data (the "5DiHQgKbsJqck4TWhMVO" used with http://localhost:8000 in the SenderCachet call above).

So it will be a task for Docker to export the configured container and import it again – easy stuff, right? Again, no.

So I start a small Docker container with:

docker run -p 8000:8000 --name myCachetContainer -e APP_KEY=base64:ybug5it9Koxwhfi5a6CORbWdpjVqXxkz/Tyj4K45GKc= -e DEBUG=false -e DB_DRIVER=sqlite cachethq/docker

and then, browsing to http://localhost:8000, I have configured Cachet and grabbed the token.

Now it is time to export the container:

docker export myCachetContainer -o cachet.tar

And to import it as an image:

docker import cachet.tar mycac

And to run the imported image again:

docker run -p 8000:8000 -e APP_KEY=base64:ybug5it9Koxwhfi5a6CORbWdpjVqXxkz/Tyj4K45GKc= -e DEBUG=false -e DB_DRIVER=sqlite mycac

And the container stopped immediately! After many tries and a docker inspect of the initial image, I ended up with:

docker run -it -p 8000:8000 -e APP_KEY=base64:ybug5it9Koxwhfi5a6CORbWdpjVqXxkz/Tyj4K45GKc= -e DEBUG=false -e DB_DRIVER=sqlite --workdir /var/www/html --user 1001:1001 mycac "/sbin/entrypoint.sh"

So the working directory, the user, and the entry point are not copied into the imported image – you have to set them yourself.

The final preparation for CI with Docker for Cachet? I have pushed the image to Docker Hub, and I will run it from Docker Compose.

So now my Docker Compose file with SQL Server and Cachet looks this way:

version: '3'
services:
   db:
     image: mcr.microsoft.com/mssql/server
     ports:
       - "1433:1433"
     environment:
       SA_PASSWORD: "<YourStrong!Passw0rd>"
       ACCEPT_EULA: "Y"
     healthcheck:
       test: /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'SELECT 1'

   cachet:
     image: ignatandrei/ci_cachet
     ports:
       - "8000:8000"
     environment:
       APP_KEY: "base64:ybug5it9Koxwhfi5a6CORbWdpjVqXxkz/Tyj4K45GKc="
       DEBUG: "false"
       DB_DRIVER: "sqlite"
     user: "1001"
     working_dir: "/var/www/html"
     entrypoint: "/sbin/entrypoint.sh"

And I have nice C# integration tests with Azure DevOps, Docker, SQL Server, and Cachet! You can see the code coverage report at https://codecov.io/gh/ignatandrei/stankins/src/master/stankinsv2/solution/StankinsV2/Stankins.Cachet/SenderCachet.cs

.NET Core Alphabet

What I wanted is a simple application (web, mobile, desktop) that can list, alphabetically, the .NET Core keywords. What is the purpose?

  1. For interviews – suppose you want to test a candidate’s knowledge of C#. You start the application (again: desktop, web, or mobile) and let the candidate choose a letter. Then you see the keywords for that letter and ask the candidate to explain some of them.
  2. For remembering features: there are so many features in the .NET languages ( https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-version-history ) that it is good for a programmer to know – or to revisit – the features that are in the language.
  3. For contests between programmers – like the interviews, but for passionate programmers who want an easy way to decide who has the best memory.
  4. Maybe other uses that I do not know of? Please share in the comments.

Now for the realization: what I want is a simple application that has inside a database with keywords, links, and so on. From this database, the source code for the data will be generated, and the application(s) will be generated too. Also, the data should be publicly available to profit from the power of the crowd – anyone who wants to add something can do so. The generation itself is done with two Stankins console commands:

stankins.console execute -o ReceiveRestFromFile -a primaryData/netCoreAlphabet.json -o SenderToTypeScript -a "" -o TransformerConcatenateOutputString -a a.ts -o SenderOutputToFolder -a $(Build.ArtifactStagingDirectory)/data/ -a false

stankins.console execute -o ReceiveRestFromFile -a primaryData/netCoreAlphabet.json -o SenderToRazorFromFile -a primaryData/markdown.txt -o TransformerConcatenateOutputString -a cards.md -o SenderOutputToFolder -a $(Build.ArtifactStagingDirectory)/data/ -a false
 

And to complete all this, it is put into an Azure DevOps pipeline: https://github.com/ignatandrei/netCoreAlphabet/blob/master/azure-pipelines.yml

You can see the result on Android: https://play.google.com/store/apps/details?id=com.github.ignatandrei.netcorealphabet&hl=en and on the website: https://ignatandrei.github.io/netCoreAlphabet

Also, if you want, please contribute by making a PR editing https://github.com/ignatandrei/netCoreAlphabet/blob/master/primaryData/netCoreAlphabet.json or by helping to enhance the application by solving https://github.com/ignatandrei/netCoreAlphabet/issues

Create a new exception – add fields and/or properties

This post is not about why we need custom exceptions (https://blogs.msdn.microsoft.com/jaredpar/2008/10/20/custom-exceptions-when-should-you-create-them/). It is (more of a rant) about a specific item in the best practices for exceptions (https://docs.microsoft.com/en-us/dotnet/standard/exceptions/best-practices-for-exceptions).

It says:

In custom exceptions, provide additional properties as needed

Provide additional properties for an exception (in addition to the custom message string) only when there’s a programmatic scenario where the additional information is useful. For example, the FileNotFoundException provides the FileName property.
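
To see why such a property matters, here is a small sketch (my example, not from the docs) that consumes FileName programmatically instead of parsing the message:

using System;
using System.IO;

class Demo
{
    static void Main()
    {
        try
        {
            var text = File.ReadAllText("settings.json");
        }
        catch (FileNotFoundException ex)
        {
            // FileName gives the failing file without parsing the message text
            Console.WriteLine($"Missing file: {ex.FileName}");
        }
    }
}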

What I want to add: in EVERY exception that you create in code, DEFINE a custom field. It is useless without one!

Why this rant? In Stankins I have to intercept KeyNotFoundException, and I want to find the key that was not found (a problem with some dictionary) in order to provide it. (Yes, it is a flawed design – but that is not the point here.) The problem was the definition:

public class KeyNotFoundException : SystemException, ISerializable
{
    public KeyNotFoundException();
    public KeyNotFoundException(string message);
    public KeyNotFoundException(string message, Exception innerException);
    protected KeyNotFoundException(SerializationInfo info, StreamingContext context);
}

See the problem? There is no way to find WHAT key was not found. So I ended up with this code:

name = innerKeyEx.Message;
// The given key 'nameColumn' was not present in the dictionary.
var first = name.IndexOf("'");
var last = name.IndexOf("'", first + 1);
name = name.Substring(first + 1, last - first - 1);

Moral of the post? Do NOT define a custom exception without defining a field/property inside!
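
As an illustration, a minimal sketch of how such an exception could look (a hypothetical type, not the BCL one):

using System;

// hypothetical: a key-not-found exception that exposes the key as a property
public class KeyNotFoundException<TKey> : Exception
{
    // the programmatically useful information - no message parsing needed
    public TKey Key { get; }

    public KeyNotFoundException(TKey key)
        : base($"The given key '{key}' was not present in the dictionary.")
    {
        Key = key;
    }
}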

Dynamically loading controllers in .NET Core

I needed to dynamically load some controllers into the Stankins application. Dynamic as in: written in some text file, then loaded. The best reference was https://www.strathweb.com/2018/04/generic-and-dynamically-generated-controllers-in-asp-net-core-mvc/. But it was not enough: what I wanted was to write the controllers in a text file, then compile and load them.

First problem: what about the DLLs to be referenced at compile time?

I ended up with this code:

var refs = new List<MetadataReference>();
var ourRefs = Assembly.GetEntryAssembly().GetReferencedAssemblies();

foreach (var item in ourRefs)
{
    var ass = Assembly.Load(item);
    refs.Add(MetadataReference.CreateFromFile(ass.Location));
}
refs.Add(MetadataReference.CreateFromFile(typeof(Attribute).Assembly.Location));
//MetadataReference NetStandard = MetadataReference.CreateFromFile(Assembly.Load("netstandard, Version=2.0.0.0").Location);
MetadataReference NetStandard = MetadataReference.CreateFromFile(Assembly.Load("netstandard").Location);
refs.Add(NetStandard);
refs.Add(MetadataReference.CreateFromFile(typeof(object).GetTypeInfo().Assembly.Location));
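
For context, the compilation used below can be built from these references. A minimal Roslyn sketch (the sourceCode variable is my assumption for the controller text read from the file):

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

// parse the controller source read from the text file
var syntaxTree = CSharpSyntaxTree.ParseText(sourceCode);
// compile it against the references collected above
var compilation = CSharpCompilation.Create(
    "DynamicControllers",
    new[] { syntaxTree },
    refs,
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));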

Second problem: where to put the compile errors? The simplest option (not the best one) is to append them as comments at the end of the file – at reload time I will find and correct them.

using (var ms = new MemoryStream())
{
    var res = compilation.Emit(ms);

    if (!res.Success)
    {
        string diag = string.Join(Environment.NewLine, res.Diagnostics.Select(it => it.ToString()));
        File.AppendAllText(fileName, "/*" + diag + "*/");
        return null;
    }
}

(Better: display the errors inline…)

Third problem: the controllers are loaded at the initial step of configuring ASP.NET Core MVC 2.x, not later. The assembly loader/unloader will come as a feature in .NET Core 3 – so not yet. I figured that the best solution is to restart the application:

static CancellationTokenSource cancel;
static bool InternalRequestRestart;

public async static Task Main(string[] args)
{
    do
    {
        InternalRequestRestart = false;
        cancel = new CancellationTokenSource();
        await CreateWebHostBuilder(args).Build().RunAsync(cancel.Token);
        await Task.Delay(10); // just waiting some time
        Console.WriteLine("restarting");
    } while (InternalRequestRestart);
}

public static void Shutdown()
{
    InternalRequestRestart = true;
    cancel.Cancel();
}
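
Shutdown is then called from the application itself when a new controller is saved. A hypothetical sketch of such an endpoint (not the exact Stankins code):

[HttpPost("save")]
public IActionResult SaveController([FromBody] string sourceCode)
{
    // persist the new controller source, then restart so MVC loads it
    System.IO.File.WriteAllText("controllers/NewController.cs", sourceCode);
    Program.Shutdown();
    return Ok("restarting");
}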

 

You can find the source code for loading the controllers at https://github.com/ignatandrei/stankins/blob/master/stankinsv2/solution/StankinsV2/StankinsData/GenericControllerFeatureProvider.cs

You can find the source code for restarting the application at https://github.com/ignatandrei/stankins/blob/master/stankinsv2/solution/StankinsV2/StankinsData/Program.cs

You can test the code at https://azurestankins.azurewebsites.net/receiveData (add a new controller, hit save, and then access the route – or Swagger at https://azurestankins.azurewebsites.net/swagger/index.html)

– or you can try it with Docker:

(for Linux containers)

docker run -p 5000:5000 ignatandrei/stankins_linux

(for Windows containers)

docker run -p 5000:5000 ignatandrei/stankins_linux

(restart Docker if there are any problems)

and access http://localhost:5000/receiveData

That’s all 😉

[Presentation] Angular + .NET Core => Applications for Mobile, Web and Desktop – Andrei Ignat

I was invited to http://apexvox.net/#schedule to present Angular + .NET Core => Applications for Mobile, Web and Desktop.

It was very exciting to find so many people interested in .NET. And it was good to see old friends again, like http://vunvulearadu.blogspot.com/2019/03/post-event-apexvox-cluj-napoca-2019.html, Daniel Costea, and Ciprian Jichici, and to make new ones.

You can find the code and the presentation at https://github.com/ignatandrei/angNetCoreDemo/ .
If you need help, please contact me directly

Identify version for application and components for Backend (.NET Core) and FrontEnd (Angular) – part 2 – backend

Part 1: Introduction and Concepts

Part 2: Obtaining BackEnd Components Version

Part 3: Obtaining FrontEnd Component Version and Final Library

Live Demo 

NPM component
Copy paste NET code

Identifying the version of the DLLs used on the backend is fairly simple. All we need is to iterate over the current directory and read the version of each DLL. It is just a simple controller that receives, via dependency injection, the path to the current directory. It is no harder than this code:

 

[HttpGet]
public FileVersionInfo[] GetVersions([FromServices] IHostingEnvironment hosting)
{
    var dirPath = hosting.ContentRootPath;
    var ret = new List<FileVersionInfo>();
    var files = Directory.EnumerateFiles(dirPath, "*.dll", SearchOption.AllDirectories)
        .Union(Directory.EnumerateFiles(dirPath, "*.exe", SearchOption.AllDirectories));

    foreach (string item in files)
    {
        try
        {
            var info = FileVersionInfo.GetVersionInfo(item);
            ret.Add(info);
        }
        catch (Exception)
        {
            //TODO: log
        }
    }
    return ret.ToArray();
}

Identify version for application and components for Backend (.NET Core) and FrontEnd (Angular) – part 1 – introduction

Part 1: Introduction and Concepts

Part 2: Obtaining BackEnd Components Version

Part 3: Obtaining FrontEnd Component Version and Final Library

Live Demo 

NPM component
Copy paste NET code

These days, quickly recognizing the version of the software you deploy is important (very important if you do not have a continuous upgrade strategy, like Chrome has – or, back in my day, using ClickOnce). Moreover, you should be able to recognize what components you are using in the software.

This has been very interesting to me (see http://msprogrammer.serviciipeweb.ro/2014/10/13/tt-add-more-informations-net-version-build/ and http://msprogrammer.serviciipeweb.ro/2019/01/14/opensource-library-versioning/). However, this is not about how to version the application – but how to display the versions of the backend and front-end components.

So we need a front-end HTML page (or WPF, or WinForms, or another front end) and two pieces of information: one about the components of the backend and one about the components of the front end.

I have made such a component – it is made for Angular and .NET Core. It can be adapted very easily for all other stacks, by transforming the Angular library into a WebComponents library and the .NET Core Web API into a Node.js HTTP REST service.

If you want to see the final product, please check https://www.npmjs.com/package/versions-netcore-angular and the live demo at https://azurestankins.azurewebsites.net/about

You will see the versions of the .NET components and of the Angular components that we are using.

Ping – an abstraction

In C#/.NET Core there is a Ping class – see https://docs.microsoft.com/en-us/dotnet/api/system.net.networkinformation.ping.send?view=netstandard-2.0 . And you may ask – why do we need a Ping class for such a mundane task as pinging a PC?

Because, as with any OS command, there are differences. Let’s take the smallest example: I want to ping a PC one time.

On Windows, there is ping -n count.

On Linux, there is ping -c count.

So on Windows the -c flag does not exist, and on Linux -n means something else.

Moral of the story: always look for the abstraction that spares you from having to read (see below) and understand why it does not work…
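
A minimal sketch of using that abstraction – one echo request, identical code on Windows and Linux:

using System;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class PingDemo
{
    static async Task Main()
    {
        using (var ping = new Ping())
        {
            // no OS-specific -n / -c flags needed
            PingReply reply = await ping.SendPingAsync("localhost", 1000);
            Console.WriteLine($"{reply.Status} in {reply.RoundtripTime} ms");
        }
    }
}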

PING on Windows:

Usage: ping [-t] [-a] [-n count] [-l size] [-f] [-i TTL] [-v TOS]
             [-r count] [-s count] [[-j host-list] | [-k host-list]]
             [-w timeout] [-R] [-S srcaddr] [-c compartment] [-p]
             [-4] [-6] target_name

Options:
     -t             Ping the specified host until stopped.
                    To see statistics and continue – type Control-Break;
                    To stop – type Control-C.
     -a             Resolve addresses to hostnames.
     -n count       Number of echo requests to send.
     -l size        Send buffer size.
     -f             Set Don’t Fragment flag in packet (IPv4-only).
     -i TTL         Time To Live.
     -v TOS         Type Of Service (IPv4-only. This setting has been deprecated
                    and has no effect on the type of service field in the IP
                    Header).
     -r count       Record route for count hops (IPv4-only).
     -s count       Timestamp for count hops (IPv4-only).
     -j host-list   Loose source route along host-list (IPv4-only).
     -k host-list   Strict source route along host-list (IPv4-only).
     -w timeout     Timeout in milliseconds to wait for each reply.
     -R             Use routing header to test reverse route also (IPv6-only).
                    Per RFC 5095 the use of this routing header has been
                    deprecated. Some systems may drop echo requests if
                    this header is used.
     -S srcaddr     Source address to use.
     -c compartment Routing compartment identifier.
     -p             Ping a Hyper-V Network Virtualization provider address.
     -4             Force using IPv4.
     -6             Force using IPv6.

PING on Linux:

ping [ -LRUbdfnqrvVaAB] [ -c count] [ -i interval] [ -l preload] [ -p pattern] [ -s packetsize] [ -t ttl] [ -w deadline] [ -F flowlabel] [ -I interface] [ -M hint] [ -Q tos] [ -S sndbuf] [ -T timestamp option] [ -W timeout] [ hop …] destination

Description
ping uses the ICMP protocol’s mandatory ECHO_REQUEST datagram to elicit an ICMP ECHO_RESPONSE from a host or gateway. ECHO_REQUEST datagrams (”pings”) have an IP and ICMP header, followed by a struct timeval and then an arbitrary number of ”pad” bytes used to fill out the packet.

Options
-a
Audible ping.
-A
Adaptive ping. Interpacket interval adapts to round-trip time, so that effectively not more than one (or more, if preload is set) unanswered probes present in the network. Minimal interval is 200msec for not super-user. On networks with low rtt this mode is essentially equivalent to flood mode.
-b
Allow pinging a broadcast address.
-B
Do not allow ping to change source address of probes. The address is bound to one selected when ping starts.
-c count
Stop after sending count ECHO_REQUEST packets. With deadline option, ping waits for count ECHO_REPLY packets, until the timeout expires.
-d
Set the SO_DEBUG option on the socket being used. Essentially, this socket option is not used by Linux kernel.
-F flow label
Allocate and set 20 bit flow label on echo request packets. (Only ping6). If value is zero, kernel allocates random flow label.
-f
Flood ping. For every ECHO_REQUEST sent a period ”.” is printed, while for ever ECHO_REPLY received a backspace is printed. This provides a rapid display of how many packets are being dropped. If interval is not given, it sets interval to zero and outputs packets as fast as they come back or one hundred times per second, whichever is more. Only the super-user may use this option with zero interval.
-i interval
Wait interval seconds between sending each packet. The default is to wait for one second between each packet normally, or not to wait in flood mode. Only super-user may set interval to values less 0.2 seconds.
-I interface address
Set source address to specified interface address. Argument may be numeric IP address or name of device. When pinging IPv6 link-local address this option is required.
-l preload
If preload is specified, ping sends that many packets not waiting for reply. Only the super-user may select preload more than 3.
-L
Suppress loopback of multicast packets. This flag only applies if the ping destination is a multicast address.
-n
Numeric output only. No attempt will be made to lookup symbolic names for host addresses.
-p pattern
You may specify up to 16 ”pad” bytes to fill out the packet you send. This is useful for diagnosing data-dependent problems in a network. For example, -p ff will cause the sent packet to be filled with all ones.
-Q tos
Set Quality of Service -related bits in ICMP datagrams. tos can be either decimal or hex number. Traditionally (RFC1349), these have been interpreted as: 0 for reserved (currently being redefined as congestion control), 1-4 for Type of Service and 5-7 for Precedence. Possible settings for Type of Service are: minimal cost: 0x02, reliability: 0x04, throughput: 0x08, low delay: 0x10. Multiple TOS bits should not be set simultaneously. Possible settings for special Precedence range from priority (0x20) to net control (0xe0). You must be root (CAP_NET_ADMIN capability) to use Critical or higher precedence value. You cannot set bit 0x01 (reserved) unless ECN has been enabled in the kernel. In RFC2474, these fields has been redefined as 8-bit Differentiated Services (DS), consisting of: bits 0-1 of separate data (ECN will be used, here), and bits 2-7 of Differentiated Services Codepoint (DSCP).
-q
Quiet output. Nothing is displayed except the summary lines at startup time and when finished.
-R
Record route. Includes the RECORD_ROUTE option in the ECHO_REQUEST packet and displays the route buffer on returned packets. Note that the IP header is only large enough for nine such routes. Many hosts ignore or discard this option.
-r
Bypass the normal routing tables and send directly to a host on an attached interface. If the host is not on a directly-attached network, an error is returned. This option can be used to ping a local host through an interface that has no route through it provided the option -I is also used.
-s packetsize
Specifies the number of data bytes to be sent. The default is 56, which translates into 64 ICMP data bytes when combined with the 8 bytes of ICMP header data.
-S sndbuf
Set socket sndbuf. If not specified, it is selected to buffer not more than one packet.
-t ttl
Set the IP Time to Live.
-T timestamp option
Set special IP timestamp options. timestamp option may be either tsonly (only timestamps), tsandaddr (timestamps and addresses) or tsprespec host1 [host2 [host3 [host4]]] (timestamp prespecified hops).
-M hint
Select Path MTU Discovery strategy. hint may be either do (prohibit fragmentation, even local one), want (do PMTU discovery, fragment locally when packet size is large), or dont (do not set DF flag).
-U
Print full user-to-user latency (the old behaviour). Normally ping prints network round trip time, which can be different f.e. due to DNS failures.
-v
Verbose output.
-V
Show version and exit.
-w deadline
Specify a timeout, in seconds, before ping exits regardless of how many packets have been sent or received. In this case ping does not stop after count packet are sent, it waits either for deadline expire or until count probes are answered or for some error notification from network.
-W timeout
Time to wait for a response, in seconds. The option affects only timeout in absense of any responses, otherwise ping waits for two RTTs.

OpenSource library- conclusion

Part 1

Implement Open-source library guidance

Part 2

OpenSource library – Cross-platform targeting

Part 3

OpenSource library-Dependencies

Part 4

OpenSource library- Source Link

Part 5

OpenSource library-versioning

Part 6

OpenSource library- Breaking changes

Part 7

OpenSource library- conclusion

Following the guidance from https://docs.microsoft.com/en-us/dotnet/standard/library-guidance/ is somewhat simple. Most of the rules are already implemented, others are easy to implement in a DevOps continuous integration workflow (such as Azure DevOps), and others take hardly half an hour.

It would be nice if we had a badge for this – I will consider making one some day…

OpenSource library- Breaking changes

Part 1

Implement Open-source library guidance

Part 2

OpenSource library – Cross-platform targeting

Part 3

OpenSource library-Dependencies

Part 4

OpenSource library- Source Link

Part 5

OpenSource library-versioning

Part 6

OpenSource library- Breaking changes

Part 7

OpenSource library- conclusion

Following the guidance from https://docs.microsoft.com/en-us/dotnet/standard/library-guidance/breaking-changes, here are the recommendations and how they apply to AOP Roslyn:

1. DO think about how your library will be used. What effect will breaking changes have on applications and libraries that use it?
2. DO minimize breaking changes when developing a low-level .NET library.
3. CONSIDER publishing a major rewrite of a library as a new NuGet package.
4. CONSIDER leaving new features off by default, if they affect existing users, and let developers opt in to the feature with a setting.
5. DO NOT change an assembly name.
6. DO NOT add, remove, or change the strong naming key.
7. CONSIDER using abstract base classes instead of interfaces.
8. CONSIDER placing the ObsoleteAttribute on types and members that you intend to remove. The attribute should have instructions for updating code to no longer use the obsolete API.
9. CONSIDER keeping types and methods with the ObsoleteAttribute indefinitely in low- and middle-level libraries.

Unfortunately, AOP Roslyn is not yet at the stage where breaking changes are a concern. But I have had another library, Exporter, where I applied recommendation 3: CONSIDER publishing a major rewrite of a library as a new NuGet package.
