Category: .NET Core

Rate Limiter–CORS limited

The problem: for the site (Angular, React, plain HTML) deployed into the wwwroot of a .NET Core application, I want unlimited requests. Also, if I make requests to localhost (when I try locally), I want unlimited requests. However, if another site makes requests to my API (CORS enabled), the requests should be limited.

There is the Stefan Prodan package – https://github.com/stefanprodan/AspNetCoreRateLimit . And in .NET Core 7 the built-in rate limiter has appeared, https://learn.microsoft.com/en-us/aspnet/core/performance/rate-limit , with 4 limiters – please be sure that you read that article.
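For orientation, here is a minimal sketch (my example, not code from that article) of registering the four built-in limiters – the policy names and the numbers are illustrative only:

// a minimal sketch, assuming a standard Program.cs with a WebApplicationBuilder named 'builder'
// using Microsoft.AspNetCore.RateLimiting;
// using System.Threading.RateLimiting;
builder.Services.AddRateLimiter(opt =>
{
    // fixed window: at most 3 requests per minute
    opt.AddFixedWindowLimiter("fixed", o =>
    {
        o.PermitLimit = 3;
        o.Window = TimeSpan.FromMinutes(1);
    });
    // sliding window: the minute is split into segments, so the limit "slides"
    opt.AddSlidingWindowLimiter("sliding", o =>
    {
        o.PermitLimit = 3;
        o.Window = TimeSpan.FromMinutes(1);
        o.SegmentsPerWindow = 6;
    });
    // token bucket: tokens are replenished periodically
    opt.AddTokenBucketLimiter("token", o =>
    {
        o.TokenLimit = 3;
        o.TokensPerPeriod = 1;
        o.ReplenishmentPeriod = TimeSpan.FromSeconds(20);
    });
    // concurrency: at most 3 requests executing at the same time
    opt.AddConcurrencyLimiter("concurrency", o =>
    {
        o.PermitLimit = 3;
        o.QueueLimit = 1;
    });
});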

Back to the problem:

So we have 2 limits: NoLimit if the request comes from the same site, and a simple limiter (3 requests per minute) if it comes from another site.

var noLimit = RateLimitPartition.GetNoLimiter("");

Func<string, RateLimitPartition<string>> simpleLimiter =
    (string address) =>
        RateLimitPartition.GetFixedWindowLimiter(address, _ =>
        {
            return new FixedWindowRateLimiterOptions()
            {
                PermitLimit = 3,
                Window = TimeSpan.FromMinutes(1),
                QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
                QueueLimit = 1
            };
        });

Now, let's see how we apply it:

  1. Verify if the host is unknown or localhost – then no limit.
  2. Verify if the headers have an Origin key:
    1. If they do not – it could be the same site or a desktop client – then no limit (for an XHR request to the same site, Edge sends the "Origin" header – Chrome does not! – same for desktop).
    2. If they do – verify if it is the same as the host:
      1. If it is, then no limit.
      2. If it is not, then the simple limit.
  3. Return the default simple limit.

And this is the code

builder.Services.AddRateLimiter(opt =>
{
    opt.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    opt.OnRejected = (ctx, ct) =>
    {
        ctx.HttpContext.Response.Headers.Add("tiltLimit", "please try later");
        return ValueTask.CompletedTask;
    };
    opt.AddPolicy("UnlimitMeAndLocalHost", context =>
    {
        var host = context.Request.Host;
        var hostName = host.HasValue ? host.Host : "";
        if (string.IsNullOrWhiteSpace(hostName))
        {
            Console.WriteLine("no host???");
            return simpleLimiter("");
        }
        if (string.Equals(hostName, "localhost", StringComparison.InvariantCultureIgnoreCase))
        {
            Console.WriteLine("localhost has no limit");
            return noLimit;
        }
        //Chrome does not send Origin if same site
        var hasOrigin = context.Request.Headers.TryGetValue("Origin", out var origin);
        //maybe also verify Referer?
        if (!hasOrigin)
        {
            Console.WriteLine("no origin - same site?");
            return noLimit;
        }
        //Edge sends Origin
        var originHost = origin.ToString();
        //removing the scheme
        if (originHost.StartsWith("http://"))
        {
            originHost = originHost.Substring(7);
        }
        if (originHost.StartsWith("https://"))
        {
            originHost = originHost.Substring(8);
        }
        var fullHost = context.Request.Host.Host;
        Console.WriteLine($"has origin {originHost} , full host {fullHost}");
        if (string.Equals(fullHost, originHost, StringComparison.CurrentCultureIgnoreCase))
        {
            Console.WriteLine("same site - no cors");
            return noLimit;
        }
        //return noLimit;
        return simpleLimiter(origin.ToString());
    });
});

Pretty complicated – read again the algorithm above.
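Note that the policy is only enforced once the rate limiter middleware is in the pipeline and the endpoints opt into it. A minimal sketch – the MapControllers() call here is only an example of how endpoints can require the policy, not necessarily the exact mapping used in TILT:

// continuing the Program.cs above - a sketch, not the article's exact code
var app = builder.Build();

app.UseRateLimiter();

// endpoints opt into the policy by name
app.MapControllers().RequireRateLimiting("UnlimitMeAndLocalHost");

app.Run();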

As a proof of concept, browse to https://ignatandrei.github.io/TILT/tilt/public and to http://tiltwebapp.azurewebsites.net/AngTilt/ with the developer tools open – you will see 429 errors for the first, but not for the second.

And to see it in action, execute this PowerShell 4 times, quickly:

cls
$x = ""
$hostName = "https://tiltwebapp.azurewebsites.net"
#$hostName = "http://localhost:9900"

$fullUrl = $hostName + "/api/PublicTILTs/LatestTILTs/ignatandrei/1"

$session = New-Object Microsoft.PowerShell.Commands.WebRequestSession
$session.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.62"
$x = Invoke-WebRequest -UseBasicParsing -Uri $fullUrl `
-WebSession $session `
-Headers @{
  "Origin" = "http://www.foo.com/"
  "Accept-Encoding" = "gzip, deflate, br"
  "Accept-Language" = "en-US,en;q=0.9"
  "Referer" = "https://netcoreusefullendpoints.azurewebsites.net/swagger/index.html"
  "accept" = "application/json"
  "sec-ch-ua" = "`"Microsoft Edge`";v=`"107`", `"Chromium`";v=`"107`", `"Not=A?Brand`";v=`"24`""
  "sec-ch-ua-mobile" = "?0"
  "sec-ch-ua-platform" = "`"Windows`""
}

$x.StatusCode
Write-Host " --------------------------"

NetCoreUsefullEndpoints–part 6–passing to .NET 7

So .NET 7 has appeared and I decided to move NetCoreUsefullEndpoints to .NET 7.

Also, for the RateLimiter, I considered that it is good to know if the request is local or remote … so I decided to add the connection (remote IP, local IP, and more details) to the NuGet package – the endpoint is shown below, first in its .NET 6 form.

So, first things first: modify the version from 6.2022.1203.1551 to 7.2022.1203.1551 (my versioning scheme is <.NET Core major version>.<year>.<MMdd>.<HHmm> – pretty stable, and it makes it easy to decide which package you should add).

Then I wanted to take the opportunity to show the return types in Swagger with TypedResults – so, as an example, I have modified from

route.MapGet("api/usefull/httpContext/Connection", (HttpContext httpContext) =>
{
    var con = httpContext.Connection;
    if (con == null)
    {
        return Results.NoContent();
    }
    var conSerialize = new
    {
        LocalIpAddress = con.LocalIpAddress?.ToString(),
        RemoteIpAddress = con.RemoteIpAddress?.ToString(),
        con.RemotePort,
        con.LocalPort,
        con.ClientCertificate,
        con.Id
    };
    return Results.Ok(conSerialize);
})

to

route.MapGet("api/usefull/httpContext/Connection",
    Results<NoContent, Ok<object>>
    (HttpContext httpContext) =>
{
    var con = httpContext.Connection;
    if (con == null)
    {
        return TypedResults.NoContent();
    }
    var conSerialize = new
    {
        LocalIpAddress = con.LocalIpAddress?.ToString(),
        RemoteIpAddress = con.RemoteIpAddress?.ToString(),
        con.RemotePort,
        con.LocalPort,
        con.ClientCertificate,
        con.Id
    };
    return TypedResults.Ok((object)conSerialize);
})

As you see, a pretty easy modification – declaring the INestedHttpResult return type Results<NoContent, Ok<object>> (so that Swagger understands the 2 different return types) and returning TypedResults instead of Results.

Also, the GitHub CI must install the .NET 7 SDK, and the Azure App Service must be switched to .NET 7 STS.

[Nuget] dotnet-run-script

I found this awesome package – https://github.com/xt0rted/dotnet-run-script . It is good for making script macros in global.json and then executing them in a CI/CD scenario.

For example, NetCoreUsefullEndpoints used this in YAML (pretty standard):

# - name: Restore dependencies
#   run: |
#     cd src
#     cd UsefullEndpoints
#     dotnet tool restore
#     dotnet pwsh readme.ps1
#     dotnet restore
# - name: Build
#   run: |
#     cd src
#     cd UsefullEndpoints
#     dotnet build --no-restore
# - name: Pack
#   run: |
#     cd src
#     cd UsefullEndpoints
#     cd UsefullExtensions
#     dotnet pack -o ../nugetPackages --include-symbols --include-source

Now I have a global.json

{
  "scripts": {
    "make_readme": "dotnet pwsh readme.ps1",
    "prebuild": "dotnet restore",
    "build": "dotnet build --no-restore",
    "test": "dotnet test --configuration Release",
    "prepack": "dotnet r build",
    "pack": "cd UsefullExtensions && dotnet pack -o ../nugetPackages --include-symbols --include-source"
  }
}

and the yaml is

- name: Restore dependencies
  run: |
    cd src
    cd UsefullEndpoints
    dotnet tool restore
    dotnet r make_readme
    dotnet r pack

Pretty easy – and it can be reproduced locally if you want, not just in the CI/CD actions on source control…

TILT- Telemetry/Observability for FE and BE-part 27

Now it is time to monitor the calls: how they arrive from the Web (frontend) and continue to the backend.

Fortunately, it is very simple in Application Insights

There are 3 relevant links for Angular and Application Insights:

https://devblogs.microsoft.com/premier-developer/angular-how-to-add-application-insights-to-an-angular-spa/

https://learn.microsoft.com/en-us/azure/azure-monitor/app/javascript-angular-plugin

https://learn.microsoft.com/en-us/azure/azure-monitor/app/javascript?tabs=snippet#enable-distributed-tracing

The code is relatively well written and appears OK. The fact that the AppInsights configuration sits in environment.ts is somewhat disturbing – but there is a way to circumvent it – see https://learn.microsoft.com/en-us/azure/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings .
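On the backend side, here is a minimal sketch (an assumption – the article does not show this code) of enabling Application Insights so that the server-side requests join the same distributed trace; it assumes the Microsoft.ApplicationInsights.AspNetCore package and an illustrative configuration key:

// Program.cs - a sketch, assuming Microsoft.ApplicationInsights.AspNetCore is referenced
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    // the configuration key name is just an example
    options.ConnectionString = builder.Configuration["ApplicationInsights:ConnectionString"];
});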

So now a large view of the calls looks like the application map in Application Insights – and, if you want, you can also drill down to see all the individual calls.

TILT-Count TILTS for each user-part 27

I just wanted to know, for each user/URL, how many TILTs there are. As a Business Requirement, it is not a big deal. Let's see what it means for a programmer.

1. Add functions to the backend to calculate the count (a hypothetical sketch follows after this list)

2. Add a call to that function in the frontend

3. Figure out where in the frontend this information must be shown (yes, Business Analysis happens here)

4. Display the data and correct any errors.
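For step 1, a hypothetical sketch of what such a backend controller action can look like – the method name and signature are illustrative only; the real code lives in the files listed below:

// hypothetical sketch - the real implementation is in PublicTILTS.cs / PublicTILTsController.cs (listed below)
[HttpGet("{urlPart}")]
public async Task<ActionResult<long>> CountTILTs(string urlPart)
{
    var count = await publicTILTS.CountTILTs(urlPart); // assumed method name on the logic class
    return Ok(count);
}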

Those are the files modified (.cs for the backend, the others for the frontend):

src/backend/Net6/NetTilt/NetTilt.Logic/PublicTILTS.cs
src/backend/Net6/NetTilt/NetTilt/NetTilt.WebAPI/Controllers/PublicTILTsController.cs
src/frontend/AngTilt14/src/app/app.module.ts
src/frontend/AngTilt14/src/app/one-public-tilt/one-public-tilt.component.html
src/frontend/AngTilt14/src/app/one-public-tilt/one-public-tilt.component.spec.ts
src/frontend/AngTilt14/src/app/one-public-tilt/one-public-tilt.component.ts
src/frontend/AngTilt14/src/app/public-tilts/public-tilts.component.html
src/frontend/AngTilt14/src/app/public-tilts/public-tilts.component.spec.ts
src/frontend/AngTilt14/src/app/public-tilts/public-tilts.component.ts
src/frontend/AngTilt14/src/app/public-tilts/publicTilt.ts
src/frontend/AngTilt14/src/app/services/public-tilts.service.ts

(this list was obtained with git diff cd75105 8e0ba8c --name-only)

And this is without counting the thinking about when this data should be obtained. You can see the end result at https://tiltwebapp.azurewebsites.net/AngTilt/tilt/public/ignatandrei

TILT-Passing to IAsyncEnumerable instead of array–part 25

When transmitting an array of data to an application, usually the application transfers all the data at once – or in chunks, if there are many items (page 1, page 2 and so on). Both approaches have drawbacks:

  • sending all data from the start means that the time until the user sees anything increases with the amount of data
  • paging the data means showing the user the number of pages and requiring an action from the user to go to the next page.

What if I could use IAsyncEnumerable to push the data one by one to the GUI?

So – these are the steps for the backend (.NET) and the frontend (Angular) to pass from transferring an array to transferring items one by one (IAsyncEnumerable).

Backend

I have a function that loads the data from the database and transforms it into an array. It also caches the data:

private async Task<TILT_Note_Table[]?> LatestTILTs(string urlPart, int numberTILTS){
    if (cache.TryGetValue<TILT_Note_Table[]>(urlPart, out var result))
    {
        return result;
    }
    //find id from urlPart - if not found , return null
    //caches the data and returns the array
}

Also a controller that sends the data

 public async Task<ActionResult<TILT_Note_Table[]?>> LatestTILTs(string urlPart, int numberTILTS, [FromServices] ISearchDataTILT_Note searchNotes)
{
    var data = await publicTILTS.LatestTILTs(urlPart,numberTILTS);
    if (data== null)
    {
        return new NotFoundObjectResult($"cannot find {urlPart}");
    }
    return data;
}

Also, there are some tests that verify that, when I post a new TILT, the count is 1.

Step 1: transform from array to IAsyncEnumerable

Transformation of the main function:

private async Task<IAsyncEnumerable<TILT_Note_Table>?> privateLatestTILTs(string urlPart, int numberTILTS)
{
    if (cache.TryGetValue<TILT_Note_Table[]>(urlPart, out var result))
    {
        return result.ToAsyncEnumerable(); // modification here
    }
    //find id from urlPart - if not found, return null
    //caches the data and returns the array.ToAsyncEnumerable();
}

Transformation of the controller – just add IAsyncEnumerable:

public async Task<ActionResult<IAsyncEnumerable<TILT_Note_Table>?>> LatestTILTs(string urlPart, int numberTILTS, [FromServices] ISearchDataTILT_Note searchNotes)
{
    var data = await publicTILTS.LatestTILTs(urlPart, numberTILTS);
    if (data == null)
    {
        return new NotFoundObjectResult($"cannot find {urlPart}");
    }
    return Ok(data);
}

Also, for the tests, you can add .ToArrayAsync() to get back an array.
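For example, a test that consumed the array can be adapted like this (a sketch – it assumes the System.Linq.Async package for ToArrayAsync and NUnit for the assertion; the values are illustrative):

// sketch - assumes System.Linq.Async (ToArrayAsync) and NUnit
var latest = await publicTILTS.LatestTILTs("ignatandrei", 10); // Task<IAsyncEnumerable<TILT_Note_Table>?>
var notes = await latest!.ToArrayAsync();
Assert.That(notes.Length, Is.EqualTo(1));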

Step 2: Get rid of the Task

So now obtaining the TILTs looks like this:

private async IAsyncEnumerable<TILT_Note_Table> privateLatestTILTs(string urlPart, int numberTILTS)
{
    if (cache.TryGetValue<TILT_Note_Table[]>(urlPart, out var result))
    {
        //why can't I return result.ToAsyncEnumerable() ?
        //because this is now an async iterator (it uses yield), so it can only yield items, not return a sequence
        await foreach (var item in result.ToAsyncEnumerable())
        {
            await Task.Delay(1000);
            yield return item;
        }
    }
    //same with retrieving the data
    //when sending back the data, we have to send it one by one, as for the cached case
}
The controller looks pretty much the same

[HttpGet("{urlPart}/{numberTILTS}")]
public ActionResult<IAsyncEnumerable<TILT_Note_Table>> LatestTILTs(string urlPart, int numberTILTS, [FromServices] ISearchDataTILT_Note searchNotes)
{
    var data =  publicTILTS.LatestTILTs(urlPart,numberTILTS);
    
    if (data== null)
    {
        return new NotFoundObjectResult($"cannot find {urlPart}");
    }

    return Ok(data);
}

And that is the modification for the backend. If you want to see it in action, open your browser at https://tiltwebapp.azurewebsites.net/api/PublicTILTs/LatestTILTs/ignatandrei/100000 and watch the TILTs come one by one.

Frontend

In Angular, I previously obtained the whole array at once:

 public getTilts(id:string, nr:number): Observable<TILT[]>{
    return this.http.get<TILT[]>(this.baseUrl+'PublicTILTs/LatestTILTs/'+id + '/'+nr)
    .pipe(
      tap(it=>console.log('received',it)),
      map(arr=>arr.map(it=>new TILT(it)))
    )
    ;
 }

Now we are obtaining the items one by one – how do we let the page still receive an array, without also modifying the page?

(Yes, the display could be modified to accumulate items – but I want minimal modifications to the page.)

To obtain the items one by one, we wrap fetch into an Observable:

 //https://gist.github.com/markotny/d21ef4e1af3d6ea5332b948c9c9987e5
  //https://medium.com/@markotny97/streaming-iasyncenumerable-to-rxjs-front-end-8eb5323ca282
  public fromFetchStream<T>(input: RequestInfo, init?: RequestInit): Observable<T> {
    return new Observable<T>(observer => {
      const controller = new AbortController();
      fetch(input, { ...init, signal: controller.signal })
        .then(async response => {
          const reader = response.body?.getReader();
          if (!reader) {
            throw new Error('Failed to read response');
          }
          const decoder = new JsonStreamDecoder();
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            if (!value) continue;
            decoder.decodeChunk<T>(value, item => observer.next(item));
          }
          observer.complete();
          reader.releaseLock();
        })
        .catch(err => observer.error(err));
      return () => controller.abort();
    });
  }

And to make the function emit an array (instead of just one item at a time), RxJS scan comes to the rescue:

 public getTilts(id:string, nr:number): Observable<TILT[]>{
    return this.fromFetchStream<TILT>(this.baseUrl+'PublicTILTs/LatestTILTs/'+id + '/'+nr)
    .pipe(
      tap(it=>console.log('received',it)),
      map(it=>new TILT(it)),
      scan((acc,value)=>[...acc, value], [] as TILT[])
    );  
}

You can see the final result (with a 1-second delay between TILTs, to make it visible) here: http://tiltwebapp.azurewebsites.net/AngTilt/tilt/public/ignatandrei

TILT–Details for programmers- part 24

I have organized the About page in order to show more details. See https://tiltwebapp.azurewebsites.net/AngTilt/

Zero, and the most important: the date when the CI was done.

First, Licences – .NET Core and Angular. Useful to know.

Second, info about Versions – repo and history, UI and JSON – mostly for making managers happy.

Third, Automation – Swagger and Blockly Automation – so that others can try out how to interact with the API.

Fourth, info about deployment – HealthCheck and info about the deployment environment (user, environment, errors) – for SRE.

And, to have something new, this is the map of the APIs,

obtained with NetCoreUsefullEndpoints ( https://tiltwebapp.azurewebsites.net/api/usefull/graph/text ) and rendered as a digraph with https://dreampuf.github.io/GraphvizOnline/ .

Tools Used

Visual Studio

Visual Studio Code

https://github.com/ignatandrei/RSCG_AMS

https://github.com/ignatandrei/NetCoreUsefullEndpoints/

https://github.com/ignatandrei/blocklyAutomation/

https://github.com/domaindrivendev/Swashbuckle.AspNetCore

Licences for .NET Core and Angular–part 24

I was curious about the licences that .NET Core and Angular are using.

It was interesting to find that ng build (https://angular.io/cli/build) has an --extract-licenses flag that creates 3rdpartylicenses.txt.

For .NET Core I have found https://github.com/tomchavakis/nuget-license , which creates a file with:

dotnet dotnet-project-licenses -i NetTilt\NetTilt.WebAPI -o --outfile NetTilt\NetTilt.WebAPI\wwwroot\netcorelicences.txt -t

And I copy those, in the CI, to the root of the site.

You can see the final result at https://tiltwebapp.azurewebsites.net/AngTilt/about – the number of licenses is overwhelming.

Angular: https://tiltwebapp.azurewebsites.net/3rdpartylicenses.txt (and it is missing the core dependencies of Angular…)

.NET Core: https://tiltwebapp.azurewebsites.net/netcorelicences.txt


Tools used

VS

VSCode

https://angular.io/cli/build

https://github.com/tomchavakis/nuget-license

TILT- Docker with Ductus.FluentDocker–part 23

I already have tests with SQLite – however, it would be better to also run the tests against a real SQL Server.

One of the ways to have a SQL Server is Docker – but how do you start a SQL Server Docker container every time?

One of the answers is Ductus.FluentDocker – https://www.nuget.org/packages/Ductus.FluentDocker – and this is the code to start SQL Server:

public override void StartDatabase()
{
    //string guid = Guid.NewGuid().ToString("N");
    string uniqueId = Interlocked.Increment(ref uniq).ToString(); //Guid.NewGuid().ToString("N");
    container = new Builder()
        .UseContainer()
        .WithName("sql" + uniqueId)
        .UseImage("mcr.microsoft.com/mssql/server:2022-latest")
        .ExposePort(1433, 1433)
        .WithEnvironment("SA_PASSWORD=<YourStrong@Passw0rd>", "ACCEPT_EULA=Y")
        .WaitForMessageInLog("Starting up database 'tempdb'.", TimeSpan.FromSeconds(30))
        .Build()
        .Start();
    ConstructServiceProvider();
}
static int uniq = 0;
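For completeness, a minimal sketch of the matching StopDatabase – assuming 'container' is the IContainerService field assigned above:

public override void StopDatabase()
{
    //stop and delete the container so the next test can start a fresh one
    container?.Stop();
    container?.Remove(true);
    container?.Dispose();
    container = null;
}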

I also needed a base class to consolidate the code between SQL Server and SQLite:

  1. generating the DI for both, each with a different context

  2. the steps are the same – so, base class

  3. the tests are the same – so, base class

So this is the base class:

public abstract partial class RealDBTests : FeatureFixture
{
    [SetUp]
    public void Start()
    {
        StartDatabase();
    }
    [TearDown]
    public void Stop()
    {
        StopDatabase();
    }

    public abstract void StartDatabase();

    public abstract void StopDatabase();

    public abstract IServiceCollection AddDB(IServiceCollection sc);


    public void ConstructServiceProvider()
    {
        serviceProvider = AddDB(new ServiceCollection())
            //more DI registrations here
            .BuildServiceProvider();
    }
}
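And a hypothetical sketch of the SQL Server override of AddDB – the DbContext type name, database name and connection string are illustrative only (assumes Microsoft.EntityFrameworkCore.SqlServer):

public override IServiceCollection AddDB(IServiceCollection sc)
{
    //illustrative connection string, matching the container started above
    var conn = "Server=localhost,1433;Database=tilt_tests;User Id=sa;Password=<YourStrong@Passw0rd>;TrustServerCertificate=true";
    return sc.AddDbContext<TiltContext>(options => options.UseSqlServer(conn));
}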

Also, GitHub Actions supports Docker – so now TILT has complete testing also against SQL Server.

Tools used

Docker

Visual Studio

Ductus.FluentDocker
