Random thoughts on coding and technology

Firebase Cloud Messaging with Delphi 10.1 Berlin update 2.

A comprehensive step-by-step guide covering everything you need to know to receive push notifications on your Android device using Firebase Cloud Messaging and the latest Delphi 10.1 Berlin update 2.

Push notifications let your application notify a user of new messages or events even when the user is not actively using the application (downstream messaging) (Parse.com). On Android devices, when a device receives a push notification, the application's icon and a message appear in the status bar. When the user taps the notification, they are taken to the application. Notifications can be broadcast to all users, such as for a marketing campaign, or sent to just a subset of users to give them personalised information. To provide this functionality I will rely on Firebase Cloud Messaging, which is the new version of GCM (Google Cloud Messaging), and on Delphi to develop the Android application.

1. Create your Firebase project


If you still don't have a Firebase project, create one by visiting the console, and within that project create an Android app.


I already have one project so I will use this one for my demo. Once in the project, go to Overview -> Add another app -> Android:


And give it a sensible name. In my case I called the package com.embarcadero.FirebaseCloudMessaging. This package name is important as it will be referenced later on. Once you click Add App, you will receive a google-services.json file which contains information that we will use later.

The package name is defined in your Delphi project:


So make sure that everything matches the name you give to your Firebase application, as the manifest file will contain this information.

2. Request your FCM Token


Now that we have our project configured, we need to request a unique token for our Android device from Firebase. You can see the description here of how to get the FCM token via Android Studio, but I will show the necessary steps to get the same value from our Delphi application.

Basically we are trying to get the same value that FirebaseInstanceId.getInstance().getToken() returns. We will achieve the same behaviour by using TPushServiceManager, the class responsible for handling push notifications.

The following code snippet tries to request the FCM token via TPushServiceManager:
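
The snippet below is a minimal sketch of that request, based on the standard Embarcadero push-notification sample (the form, memo and button names are my own; in practice the token may also arrive asynchronously via the service connection's OnChange event):

uses
  System.PushNotification, FMX.PushNotification.Android;

procedure TForm1.btnGetTokenClick(Sender: TObject);
var
  PushService: TPushService;
  ServiceConnection: TPushServiceConnection;
begin
  // Get the GCM/FCM push service and configure it with the Sender Id from Firebase
  PushService := TPushServiceManager.Instance.GetServiceByName(TPushService.TServiceNames.GCM);
  PushService.AppProps[TPushService.TAppPropNames.GCMAppID] := 'SENDER ID';

  // Activating the connection triggers the token request against Firebase
  ServiceConnection := TPushServiceConnection.Create(PushService);
  ServiceConnection.Active := True;

  // Device id and FCM token are exposed by the service once it is active
  Memo1.Lines.Add('DeviceID: ' + PushService.DeviceIDValue[TPushService.TDeviceIDNames.DeviceID]);
  Memo1.Lines.Add('FCM Token: ' + PushService.DeviceTokenValue[TPushService.TDeviceTokenNames.DeviceToken]);
end;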

Now, to allow this code to work correctly, we will have to configure a few things.

a) Enter the Sender Id.

In the code snippet above, I mention the SENDER ID. This sender id can be found under Firebase -> Project Settings -> Cloud Messaging:


This is the value you have to put here:

PushService.AppProps[TPushService.TAppPropNames.GCMAppID] := 'SENDER ID';

Knowing that the GCMAppID is actually the Sender Id has been quite a struggle for some users, and you can see my answer on Stack Overflow.

b) Configure the project to receive push notifications.

In the Delphi IDE, go to your project options -> Entitlement List and set the property Receive push notifications to true.


c) Configure the AndroidManifest.template.xml file.

Before we try to run the code above, we'll have to configure the manifest file to grant our application the permissions it needs to connect to Firebase. If you don't configure these permissions, you might run into an exception like the one below:


Error message: EJNIException with message 'java.lang.SecurityException: Not allowed to start service Intent { act=com.google.android.c2dm.intent.REGISTER pkg=com.google.android.gms (has extras) } without permission com.google.android.c2dm.permission.RECEIVE'.

See the code snippet below for reference:

The full source code of the solution can be found here for reference where you can find the manifest files.

Once everything is configured, we can test whether we receive the FCM token. Here is a screenshot of my project: there are two buttons, one to request the token and another to store it somewhere, so the system that sends the notifications knows about the receiver.


Let's see the project in action here:


As you can see in the image above, I get the DeviceID and the FCM Token. The one we are interested in is the FCM Token. This token is quite large so it does not appear completely on the screen.

Now we need to configure what to do when we receive a notification and how this notification is built.

3. Receive your first FCM Push notification


The following code snippet will configure the OnReceiveNotification event and will display a notification using the TNotificationCenter class.
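
A hedged sketch of what that handler could look like (the JSON key matches the gcm.notification.body property described below; the event wiring and notification fields are my own choices):

uses
  System.Notification, System.JSON;

procedure TForm1.OnReceiveNotificationEvent(Sender: TObject;
  const ServiceNotification: TPushServiceNotification);
var
  NotificationCenter: TNotificationCenter;
  Notification: TNotification;
  JsonValue: TJSONValue;
begin
  // The push payload arrives as a JSON envelope exposed through DataObject
  JsonValue := ServiceNotification.DataObject.GetValue('gcm.notification.body');
  if JsonValue = nil then
    Exit;

  NotificationCenter := TNotificationCenter.Create(nil);
  try
    Notification := NotificationCenter.CreateNotification;
    try
      Notification.Title := 'Firebase Cloud Messaging';
      Notification.AlertBody := JsonValue.Value;
      NotificationCenter.PresentNotification(Notification);
    finally
      Notification.Free;
    end;
  finally
    NotificationCenter.Free;
  end;
end;

// Wired up where the service connection is created:
// ServiceConnection.OnReceiveNotification := OnReceiveNotificationEvent;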

Notice that the ServiceNotification variable contains a DataKey member which carries a JSON envelope. This object contains all the information of our push notification. Here you can see what this notification looks like:


Notice that the message is part of the gcm.notification.body property, and this is the key that we will pass to the DataObject.GetValue method.

Let's see the application in action:



Here you can see my Android device side by side with the Firebase notification testing console. Once the application is ready to receive notifications, you just need to send the notification to the configured app, or use the token if you want to target a single device.

The next step is to store these tokens in the cloud and use your own system to deliver the messages.

Please, do not hesitate to contact me if you have any further questions.



Jordi
Embarcadero MVP.

Delphi Firemonkey (FMX) rendering TCanvas for Android

I'm currently working on a new game for Android, and one of the issues I have faced so far concerns canvas rendering. I like painting my own stuff using the canvas (here is proof of it), so imagine how I felt when I deployed the game to my Android device and noticed that nothing was being displayed, while the game worked perfectly under Windows.

The reason is that the TImage component now renders differently on Android: you have to paint everything onto a TBitmap's canvas and then assign that bitmap to the original TImage component so it gets displayed correctly, which requires a few tweaks.

The idea of the game is to find and match one of the 6 images shown with the image displayed below. Once you have identified the exact match, you just need to tap it to go to the next game. The game increases in difficulty when you score a certain number of points. You have just 10 seconds to complete the task.

The way I've built this is by using 6 TImage components and then I render there a matrix of zeros and ones that get represented into an image so the first image on the left is actually the matrix:
[1,0,0,1]
[1,0,1,0]
[1,0,0,1]
[0,1,0,0]

The numbers get randomly generated and I make sure that no duplicates are found during the randomisation.

As you can see in the image below, by levels 20 and 50 things get complicated:



If you look at the source code below, this code tries to paint the main core of the application and it works perfectly well on Windows, but it won't work under Android (you will just get a black screen).


To overcome this issue, we need to do the rendering a bit differently: create a custom bitmap, paint onto it, and then assign it to the original TImage. Here is a different version of the code above that works under both Windows and Android and renders everything correctly.
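
As a minimal sketch of the working approach (assuming a TImage called Image1 and a matrix of zeros and ones like the one shown earlier; in FMX, drawing on a bitmap canvas must be wrapped in BeginScene/EndScene):

uses
  FMX.Graphics, System.Types, System.UITypes;

procedure TFormGame.RenderMatrix(const AMatrix: TArray<TArray<Integer>>);
var
  Bmp: TBitmap;
  Row, Col: Integer;
  CellW, CellH: Single;
begin
  // Paint on an off-screen bitmap instead of the TImage canvas directly
  Bmp := TBitmap.Create(Round(Image1.Width), Round(Image1.Height));
  try
    if Bmp.Canvas.BeginScene then
    try
      Bmp.Canvas.Clear(TAlphaColors.White);
      CellW := Bmp.Width / Length(AMatrix[0]);
      CellH := Bmp.Height / Length(AMatrix);
      Bmp.Canvas.Fill.Color := TAlphaColors.Black;
      // Draw a filled cell for every 1 in the matrix
      for Row := 0 to High(AMatrix) do
        for Col := 0 to High(AMatrix[Row]) do
          if AMatrix[Row][Col] = 1 then
            Bmp.Canvas.FillRect(
              TRectF.Create(Col * CellW, Row * CellH, (Col + 1) * CellW, (Row + 1) * CellH),
              0, 0, [], 1);
    finally
      Bmp.Canvas.EndScene;
    end;
    // Assign the rendered bitmap to the original TImage so it shows up on Android
    Image1.Bitmap.Assign(Bmp);
  finally
    Bmp.Free;
  end;
end;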



Here is the game running on Android (just be aware that it's still under beta testing and that there are still many things to fix):


Once finished, I'll place it on Google Play for your amusement and share the source code. Let me know if you have any questions.

I'm applying the same concepts to my Delphi Physics Engine so I can render my models correctly on Android. I'm rewriting the library to support FMX, and from there I'll be able to render on Android without problems!

To load bespoke fonts on Android I had to use the following article, which did the trick:



Jordi

Inside refactoring with Delphi

In this article I will show you two common techniques that I use in my C# projects and that are just as relevant to any other programming language out there, in this case Delphi, as I'm sure many developers can relate to the same principles. The following code has been tested under Delphi 10.2 Tokyo.

The first technique is widely used in Functional Programming but applies to OOP as well; it's called Imperative Refactoring. The second technique helps reduce duplicated code and eliminate inconsistencies; it's called Inline Refactoring. See the examples below for guidance.

Imperative Refactoring

This technique is quite easy to understand and I'm sure you've applied this many times in your projects.

In this case, we have a method or function that contains some code that we would like to reuse. The principle says that this code needs to be extracted into another function, with a call to that function added where the previous code was. This technique is very simple and very easy to embrace for code reusability.

Here you can see a typical example:

Before Refactoring:
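
A minimal sketch of what the pre-refactoring code could look like, assuming Indy's TIdHTTP, a TListBox and a naive regex-based URL extraction (the original snippet may differ):

uses
  IdHTTP, System.RegularExpressions;

procedure TForm1.btnDownloadClick(Sender: TObject);
var
  Http: TIdHTTP;
  Response: string;
  Match: TMatch;
begin
  Http := TIdHTTP.Create(nil);
  try
    Response := Http.Get('http://example.com');
    // Parsing lives inside the same method - hard to reuse elsewhere
    for Match in TRegEx.Matches(Response, 'href="(.*?)"') do
      ListBox1.Items.Add(Match.Groups[1].Value);
  finally
    Http.Free;
  end;
end;
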
As you can see, this is a very simple example where I request a web page and then do some parsing to get the list of URLs that are part of the HTML document. Let's see how to refactor it to make it more reusable.

After Refactoring:
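
And a sketch of the refactored version, under the same assumptions (TStrings/TStringList come from System.Classes):

function TForm1.ParseHTML(const AResponse: string): TStrings;
var
  Match: TMatch;
begin
  // Extracted parsing logic - now reusable for any downloaded page
  Result := TStringList.Create;
  for Match in TRegEx.Matches(AResponse, 'href="(.*?)"') do
    Result.Add(Match.Groups[1].Value);
end;

procedure TForm1.btnDownloadClick(Sender: TObject);
var
  Http: TIdHTTP;
  Urls: TStrings;
begin
  Http := TIdHTTP.Create(nil);
  try
    Urls := ParseHTML(Http.Get('http://example.com'));
    try
      ListBox1.Items.AddStrings(Urls);
    finally
      Urls.Free;
    end;
  finally
    Http.Free;
  end;
end;
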
Notice that I've extracted the parsing functionality into a parseHTML function that takes the response, parses it and returns the list of URLs. Now I can reuse my parsing functionality should I have any other page where it is required. A no-brainer.

Inline Refactoring

This one is a bit different: here it is the outer code that becomes reusable. Imagine that we would like to refactor the inline functionality. In this example, I repeat the code that fetches an item from the internet over and over, but I would like to reuse it so that I can a) replace the HTTP component at any time without impacting the rest of the code, and b) replace the parsing part so it can return any kind of object:

The idea behind this refactoring is to be able to reuse the external call also using anonymous methods and generics.

Here is the after refactoring code:

After Refactoring:
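
A sketch of how this could look, using anonymous methods and generics (the names TWebFetcher and Fetch<T> are illustrative, not the article's actual identifiers; uses IdHTTP, System.SysUtils and System.Classes):

type
  TWebFetcher = class
  public
    // The download concern lives here; the parsing concern is injected
    class function Fetch<T>(const AUrl: string; AParser: TFunc<string, T>): T;
  end;

class function TWebFetcher.Fetch<T>(const AUrl: string; AParser: TFunc<string, T>): T;
var
  Http: TIdHTTP;
begin
  // Swap TIdHTTP for another HTTP component without touching any calling code
  Http := TIdHTTP.Create(nil);
  try
    Result := AParser(Http.Get(AUrl));
  finally
    Http.Free;
  end;
end;

// Usage (e.g. inside the form): the caller only supplies the parsing concern
Urls := TWebFetcher.Fetch<TStrings>('http://example.com',
  function(AResponse: string): TStrings
  begin
    Result := ParseHTML(AResponse); // reuse the parser from the previous example
  end);
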
As you can see, the idea is to use anonymous methods and generics heavily to be able to reuse most of the functionality and let the developer separate the concerns of downloading the page and parsing it. It also allows you to rebuild the component in a different way; e.g. in this case I'm using Indy components to request the page, but you might prefer another component. Using this approach everything is quite modular and it gives room for testing. Notice that no functionality has changed here.

You can find the full source code of this example in my personal repository on GitHub.

Jordi.
Embarcadero MVP.

Pushing Messages from Server to Client Using SignalR2 and MVC5

One of the biggest disadvantages of any web client is that it is stateless. It doesn't know if anything happens on the server side unless it requests the information again and again. In this article you will learn a very useful way of pushing updates from the server to the client using SignalR. The idea behind this concept is very simple: I need to find a way to inform the user that there will be some maintenance occurring shortly on the site and that I need them to close the browser to avoid any data loss.

SignalR, the ASP.NET library, provides a very simple and elegant way of doing this. This amazing library is designed to use the existing web transport layer (HTML5 WebSockets, with fallbacks for older browsers) and it's capable of pushing data to a wide array of clients like web pages, Windows apps, mobile apps, etc. It's extremely easy to use, real-time, and it allows developers to focus on the real problem, delegating the communication concerns to SignalR.

Overview

The image above shows the idea behind the implementation of the SignalR ecosystem. We need to be able to push a notification from one client and have that message broadcast to every single client that's listening to the SignalR hub.

Install SignalR

In order to use this functionality, first we need to install the SignalR (v2.2.1) library via NuGet package:
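
From the Package Manager Console, for example:

Install-Package Microsoft.AspNet.SignalR -Version 2.2.1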

Create a folder called "Hub" in your main MVC solution and then add a new SignalR Hub class called NotificationHub.cs, as shown below:

This will create a class that inherits from the Hub base class.

Creating the Notification Hub

Copy the following template to generate the notification hub. This hub needs a broadcast method that accepts a message as a string and a user as a string, and that every client will receive. It uses the Clients.All property to access all of the clients that are currently connected to the server (hub) and invokes a callback that each client registers on the JavaScript side.
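
A minimal sketch of such a hub (the method name BroadcastMessageToClients, the namespace and the client callback receiveNotification are illustrative):

using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

namespace MvcApplication.Hubs
{
    [HubName("notificationHub")]
    public class NotificationHub : Hub
    {
        // Pushes the message to every connected client; each client registers
        // a 'receiveNotification' callback on its JavaScript hub proxy.
        public void BroadcastMessageToClients(string message, string user)
        {
            Clients.All.receiveNotification(message, user);
        }
    }
}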

The next step is to create your Startup class where you will enable SignalR. Under the App_Start folder, create a new class called Startup.cs and add the following code:
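
Something along these lines, using the standard OWIN startup pattern (the namespace is an assumption):

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MvcApplication.Startup))]

namespace MvcApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Maps the SignalR hubs to the default "/signalr" route
            app.MapSignalR();
        }
    }
}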

This will allow you to map the available hubs using the Owin startup. Now that the Hub is ready, we need to focus on the client that will display the message received by it.

Showing the notification on the client

Now that the server side code is done, we need to be able to display the notification received by it on the client side. To do this we just need to add the relevant script references and the following code to the _layout.cshtml page:

This page contains the jQuery and SignalR scripts, the SignalR hub and the proxy hub object created with the "var notificationHub = $.connection.notificationHub;" command. Notice that notificationHub starts with lower case. This is actually very important: if you don't write it in lower case, the reference will not work.

The code works in the following way. When the client connects to the hub, the message "connected to the notification hub" should be visible in your browser console, and when a new message is received, the div #notification should empty itself and populate itself with the message received. This div sits on a separate page:

This is the aspect of the page without any notification:


Sending the notification to the client

Now the interesting part. To send the notification to the client, we can either create a separate screen on our MVC application, or just create a small utility to send the message separately. In this case I will choose the latter as it looks to me like a cleaner approach. So here is the code of my submit message functionality (WinForms) which allows me to send push notifications to all the clients connected to the Hub with just one click:
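
A sketch of that sender, using the SignalR .NET client (Microsoft.AspNet.SignalR.Client); the hub URL and the btnSend/txtMessage controls are assumptions:

using System;
using System.Windows.Forms;
using Microsoft.AspNet.SignalR.Client;

public partial class SenderForm : Form
{
    private async void btnSend_Click(object sender, EventArgs e)
    {
        // Connect to the MVC site hosting the hub and invoke the broadcast method
        using (var connection = new HubConnection("http://localhost:12345/"))
        {
            var hubProxy = connection.CreateHubProxy("notificationHub");
            await connection.Start();
            await hubProxy.Invoke("BroadcastMessageToClients", txtMessage.Text, "Admin");
        }
    }
}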

Here is the simple screen:

Finally, if you want to see the system in action, see the animation below for reference:


With this approach, you will be able to inform all your connected users easily, irrespective of the client technology or platform. The source code of the project is not yet available on my GitHub page, but I will make sure to publish it so you can test it locally and see it for yourselves.

Jordi.

Writing quality code with NDepend v2017

The new version of NDepend v2017 has totally blown my mind. I can't stop exploring the new and enhanced Dashboard with features like Technical debt estimation, Quality Gates, Rules and Issues. In this post I will try to summarise what's new with NDepend v2017 and how to use these new features to write quality code.

I'm sure you have experienced the feeling when you start typing code and, a few weeks into the project, you don't really know if what you have designed and coded is actually good or bad (by bad I mean that it's in some way rigid, fragile or non-reusable). I'm an advocate of Continuous Integration, so I do have loads of metrics that help me identify broken windows or code smells easily in my code during the check-in stage. All these metrics cover standards such as Design, Globalisation, Interoperability, Mobility, Naming, Performance, Portability, Security, Usage and so on. But none of them gives a global rating that I could easily use to check if my project is actually good or bad.

Enhanced Dashboard

This new version contains a new set of application metrics that really improved the overall quality of the product. I will start integrating this as part of my release procedure as it gives a really good grasp of the status of the project from the coding side.

Here is a quick sneak peek of the new dashboard:


The aspects I'm most interested in, and that I will delve into in detail, are the following:

Technical Debt Estimation

This new feature is a MUST for me. Just analyse your project with NDepend v2017 and let it give you the percentage of technical debt according to the rules that you have configured in your project. After every analysis you can see the trend and act accordingly using this metric:

This section takes into account the settings I have configured for my project. In this case the debt has increased from 7.75% to 7.93% due to the increase in the number of issues in the solution. It also determines that the time needed to reach band "A" is 3 hours and 32 minutes. The total amount of time needed to fix all the issues is the Debt (1 day and 1 hour).

To get values closer to reality, you have to configure your project to specify how long it will take you or any member of your team to fix an issue (most of the time I just specify half a day per issue as a rule). Here you can see the settings I have specified in my solutions as a rule of thumb and that you can consider for your projects:


These settings use the following considerations:

  • Your team will mostly code 6 hours a day. The rest of the time is spent on meetings, emails, research, etc.
  • The estimated effort to fix one issue is 4 hours. That's the minimum I would give as an average. There are issues that are fixed in 5 minutes and others that might take quite a bit of time. Don't forget that this time also includes filling in the ticket details in your scrum environment, documentation, etc.
  • Then, depending on the severity of the issue, there is a threshold specified too, as you can see in the figure above.

Another aspect to consider to get a proper estimation is also the code coverage. If you configure the coverage correctly in your solution then NDepend can get that data and use it to get a more comprehensive estimation.

To configure code coverage for NDepend you can follow my steps below:


Configuring JetBrains DotCover.

Once you've run your initial analysis, NDepend will also ask you to configure Code Coverage to get more information about your project and some additional metrics.

Go to NDepend project coverage settings under the Analysis tab and in there you'll have to select the XML file generated by DotCover.


If you run your tests with ReSharper, you can select the coverage option, then go to the export button in that menu and select "Export to XML for NDepend". Leave this file in a known folder so you can automate this easily later on. The goal here is to configure everything manually first; afterwards you will have to do the workaround so you can trigger all this with your build agent and get the report at the end of the run.

Choose the exported file and run your analysis again:


Now with all these details if you run NDepend you should get something like this:


Now you can see proper debt and coverage figures. This is a little project that I'm currently working on, and it works well to demonstrate how good NDepend is in this case. If you don't know what one of the terms means, you can just click on it and you'll be redirected to a panel with all the details about that specific metric and its description.


The following three additional panels help shape the technical debt information: Quality Gates, Rules and Issues. Below you'll find a quick introduction to each section and its relevance.


Quality Gates

Quality gates are based on Rules, Issues and Coverage. Basically, this section defines certain criteria that your project should meet in order to pass the quality check. For example: your project should contain a certain % of code coverage, your project should not contain Blocker or Critical issues, etc.

Here are some of these gates used for your reference:


Rules

Rules are defined as Project Rules and they check for violations in your code. They are like the rules defined by FxCop, and they provide real arguments as to why your code is breaking a rule or needs to be better. Once you've gone through several iterations of fixing these, your code will get cleaner and better (I promise you!). And most of all, you will understand the reason behind each rule.

Here are some of these rules:

If you think that one of these rules does not apply to your project, you can just uncheck it and the framework will take care of it so you don't have to worry about it anymore.


Issues

Issues are just a way of grouping the rule violations so you can determine which ones are important to fix. You can violate a few rules, but these violations are categorised from Blocker down to Low. So even though the project is violating 18 rules, one of them is just Low. This gives you an understanding of what's important to fix and what can wait.

Each issue also has a clear definition of the time it could take to fix:




Conclusion

To conclude, writing quality code is one of my main concerns nowadays. It's really easy to write code, and even code that works, but the difference between code that works and excellent code is quality, and NDepend has a solution for that.

I have been fiddling with tools like FxCop and NDepend for a while now, and I must say that NDepend is a must-have in my tool belt. It is really easy to use, and with just one click you can have real arguments about the issues that need to be fixed in your solution and how long the team should take to fix them.

JSON RTTI Mapper with Delphi

One of the patterns that I have observed a lot during the time I have been playing with JSON streams is that I create my object based on the JSON stream and then I set the properties of that particular object manually so I can work with it rather than with the JSON object itself. 

Let's observe the following example. Imagine that we have the following JSON stream:
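
Something like this, for illustration (the values are made up):

[
  { "name": "John", "surname": "Smith", "age": 35, "address": "London" },
  { "name": "Jane", "surname": "Doe", "age": 30, "address": "Manchester" }
]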


As you can see, this JSON represents a list of employees and each employee has the properties Name, Surname, Age and Address. So if we want to hold this in a TList<T>, we will have to create a class TEmployee with those properties and then manually assign each JSON field to each property, like the example below:
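
A sketch of that manual mapping (uses System.JSON and System.Generics.Collections; deriving from TPersistent keeps the published properties visible to the classic RTTI used later on):

type
  TEmployee = class(TPersistent)
  private
    FName: string;
    FSurname: string;
    FAge: Integer;
    FAddress: string;
  published
    property Name: string read FName write FName;
    property Surname: string read FSurname write FSurname;
    property Age: Integer read FAge write FAge;
    property Address: string read FAddress write FAddress;
  end;

function ParseEmployees(const AJson: string): TObjectList<TEmployee>;
var
  JsonArray: TJSONArray;
  JsonValue: TJSONValue;
  Employee: TEmployee;
begin
  Result := TObjectList<TEmployee>.Create(True);
  JsonArray := TJSONObject.ParseJSONValue(AJson) as TJSONArray;
  try
    // Manual mapping: every JSON field is assigned to its matching property
    for JsonValue in JsonArray do
    begin
      Employee := TEmployee.Create;
      Employee.Name := JsonValue.GetValue<string>('name');
      Employee.Surname := JsonValue.GetValue<string>('surname');
      Employee.Age := JsonValue.GetValue<Integer>('age');
      Employee.Address := JsonValue.GetValue<string>('address');
      Result.Add(Employee);
    end;
  finally
    JsonArray.Free;
  end;
end;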

The code is quite straight forward. We need to loop through the JSON array and populate the list.

If you look closely, you will see that we are basically mapping a field in the JSON object called "name" to an object property called "Name". So, to make it simpler, this would literally be something like this:
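
That is, one assignment per JSON field, as in the sketch above:

Employee.Name := JsonValue.GetValue<string>('name');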

Any mapper out there does one simple job and it's the job of mapping one field from one source to another.

So the question here is how to achieve this in a more clever way? Easy, let's use RTTI to map those properties!

Using the methods TypInfo.SetStrProp and TypInfo.GetPropList, you can easily explore the list of published properties of your class and set their values. To make use of these RTTI capabilities, you will have to move those properties to the published section of the class so they are visible through the RTTI.
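
A minimal sketch of such a mapper using those System.TypInfo routines (it only handles string and integer properties and matches JSON field names to property names in lower case; a real mapper would cover more type kinds):

uses
  System.TypInfo, System.JSON, System.SysUtils;

procedure MapJsonToObject(AJsonObject: TJSONObject; AInstance: TObject);
var
  PropList: PPropList;
  PropCount, I: Integer;
  PropName: string;
  JsonValue: TJSONValue;
begin
  // Read the list of published properties exposed through the classic RTTI
  PropCount := GetPropList(PTypeInfo(AInstance.ClassInfo), PropList);
  try
    for I := 0 to PropCount - 1 do
    begin
      PropName := string(PropList^[I]^.Name);
      // Look up the JSON field with the same name as the property
      JsonValue := AJsonObject.GetValue(LowerCase(PropName));
      if JsonValue = nil then
        Continue;
      case PropList^[I]^.PropType^.Kind of
        tkUString, tkLString, tkString:
          SetStrProp(AInstance, PropName, JsonValue.Value);
        tkInteger:
          SetOrdProp(AInstance, PropName, StrToIntDef(JsonValue.Value, 0));
      end;
    end;
  finally
    FreeMem(PropList);
  end;
end;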

Now you know how to use the RTTI to read the list of published properties and set them to a specific value. These examples have been coded with Delphi 10.2 Tokyo and you can find part of the mapper in one of the projects I'm currently working on: COCAnalytics.

There are many libraries out there that do amazing things with JSON so it's up to you to explore them. At least now you know how to map using the RTTI.

Happy coding!

Jordi
Delphi MVP.

Make your Delphi applications pop with Font Awesome!

Yes, you heard right: Font Awesome. You can use its icons to make your desktop applications pop. Nowadays I use it to make my websites look nicer without having to worry about finding icons for my apps, editing them and so on. Font Awesome is one of the smartest things you can use to make your applications pop.

Use Font Awesome icons in Desktop apps

First download Font Awesome and install it on your computer. At the time of this article I was using version 4.7.0, so I downloaded font-awesome-4.7.0.zip and installed the FontAwesome.otf file on my Windows 10 machine:



Font Awesome provides a cheatsheet that can be used to copy and paste the icons directly in your app so you don't have to worry about memorising any particular code to make the icon appear:



Until now I have used the common approach where I buy certain icons or draw them myself using Photoshop (although this second option is quite time consuming and I only do it when I want to achieve the best results).


I'm sure you are all familiar with this approach: you add your icon in bmp format to an ImageList component, then link the ImageList to the button and select the icon index so it appears on your button, as displayed in the image above. The problem arises when you want different sizes for that button, as you will need different icon sizes to match, and so on.

So one of the things that I tend to do now in my applications, and that makes them look a bit more standard (in terms of user experience, as the user sees the same type of icon throughout the application), is to use Font Awesome:


The animation above displays how to replace one of your icons with a Font Awesome icon easily (a runtime equivalent is sketched after the list):

  • Locate the icon you want in the Font Awesome cheat-sheet.
  • Copy the icon image (not the Unicode code).
  • Locate the component you want to add the icon to.
  • Select Font Awesome font.
  • Paste the icon in the caption or text zone.
  • Adjust size to your needs.
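
If you prefer to set it up in code, the same idea boils down to something like this (a VCL sketch; #$F015 is the fa-home glyph in Font Awesome 4.7, and the font must be installed on the machine):

// Use the Font Awesome typeface and a glyph code instead of a bitmap icon
Button1.Font.Name := 'FontAwesome';
Button1.Font.Size := 14;
Button1.Caption := #$F015; // fa-home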

In the images below you can compare the before/after and you'll see that the difference is noticeable:

Before (mixture of icons in bmp format plus some png images made with Photoshop):


After (Font Awesome fonts replacing all the icons):

Notice that now I can even include icons where there used to be only text! This way I can compose my headers more nicely and include a very descriptive icon.

You will need the Font Awesome font installed on your machine or the target machine in order to take advantage of this functionality. I've used the latest Delphi 10.2 Tokyo for this one, in case anyone was wondering.

The example above is for VCL only, but it should also work for FMX applications.

Example with FMX:


Jordi
Embarcadero MVP

Detecting Ajax Requests In ASP.NET MVC 5

Security is one of my major concerns nowadays, and it is quite common that someone will try to browse to a particular URL given any chance. To avoid this, we can detect whether the request came from an Ajax call or from a normal browser request.

Within ASP.NET MVC5 applications it is quite easy to check whether a request is being made via AJAX through the IsAjaxRequest() extension method that is available on the Request object. IsAjaxRequest() actually works by simply checking for the X-Requested-With header.

The example below will show you how to wrap this functionality inside an Action Filter so you can add this check to any controller action you want.

Creating the Action Filter:
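
A minimal sketch of such a filter, returning a 404 when the request is not Ajax (the attribute name AjaxOnlyAttribute is illustrative):

using System.Net;
using System.Web.Mvc;

public class AjaxOnlyAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Reject any request that did not come in via XMLHttpRequest
        if (!filterContext.HttpContext.Request.IsAjaxRequest())
        {
            filterContext.Result = new HttpStatusCodeResult(HttpStatusCode.NotFound);
        }
        base.OnActionExecuting(filterContext);
    }
}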

Then just use the attribute in any of the actions you want to perform this operation:
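
For example (the action and its return value are illustrative):

[AjaxOnly]
public ActionResult GetLatestData()
{
    // Only reachable from Ajax requests; normal browsing gets a 404
    return Json(new { success = true }, JsonRequestBehavior.AllowGet);
}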

And now, if you try to browse that action from a normal request you will get the following error:


Now you can make your site more secure without worrying about someone inadvertently browsing to the action.


Configure TeamCity to access private GitHub Repositories

One of the challenges I have been facing lately, after moving to private repositories on GitHub, is the ability to access them via TeamCity. The issue is that the repository is no longer accessible via https, so you have to find an alternative way to retrieve the source code of your repository securely.

For this task, I will show you how to use GitHub Deploy keys and how to configure TeamCity to interact with your private repository.


The overall idea can be seen in the figure above. First, we will have to create the keys so we can place them in the required sections. To generate the keys, you can just use Git Bash and the command below:
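
The standard key-generation command looks like this (accept the default file location to end up with id_rsa and id_rsa.pub):

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"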


Once finished, you will have two keys, the private and the public one.

Installing the public key in GitHub:

The operation above should've produced 2 keys (files):

  • id_rsa (private key)
  • id_rsa.pub (public key)

Open the file id_rsa.pub or run the following command on your git bash console to copy the content of the file into the clipboard: clip < ~/.ssh/id_rsa.pub

Now, go to your private repository on GitHub and select "Settings" and then "Deploy Keys". Once there, click on "Add deploy key" and paste the content of the file you've opened before / copied into the clipboard.


Once completed, you should see something like the image below (note that the image below shows that the key has been already used):


Installing the private key in TeamCity:

The following operations have been performed on the latest version of TeamCity at the time of publication of this article (2018.1, build 58245). I tried version 2017 initially and the configuration didn't work (just so you know, if you are still on any version prior to 2018.1):

Click on your project overview and click on "Edit project Settings". Select "SSH Keys" and click "Upload SSH Key" button to upload your id_rsa file:


Now the SSH key will be available to your VCS Root. Go to your build step and add a Git VCS Root that will pull the source code from the repository. The parameters that you have to configure are as follows:

  • VCS Root Name: Name of your VCS.
  • Fetch URL: URL of your repository in git (SSH) format, not https, as https will not be available because the repository is private. In this case you will have to replace the https URL with the git one, e.g. git@github.com:<user>/<repo>.git.
  • Default branch: refs/heads/master
  • Authentication method: Uploaded Key
  • Username: empty (don't type anything here)
  • Uploaded Key: id_rsa (the one we've just uploaded)
  • Password: the passphrase you configured for your private key, if any.

If you now test the connection, it should be successful:


If you have a look at your project, you will see that it is successfully connecting to your repository and pulling the changes that are pending to be processed by your pipeline:

I hope you find it useful as I have spent quite a lot of time just trying to find the right approach.

Jordi

Creating a File Server using ASP.NET Core 2.1 Static Files Middleware

One of the coolest features of ASP.NET Core is the ability to serve static files on HTTP requests without any server-side processing. This means that you can leverage this technology to build your own file server to serve static files over the wire. I managed to code a quick solution over the weekend to be able to upload/download files to one of my servers easily with this technology.

It all started with this Tweet:


The issue here is that both of us spend loads of time taking pictures and videos of our daughter and end up sharing all this data via WhatsApp. By the end of the day we both have the same information but in different resolutions and qualities, which makes our life quite difficult when trying to guess which picture has the highest quality for printing (we tend to print most of the pictures). So we needed somewhere simpler and quicker to store all these pictures and videos (keeping the highest possible quality) that would be easier for both of us to share.

.NET Core and Ngrok to the rescue. 

With Ngrok, I can easily expose one of my websites to the world via one of the ngrok tunnels. I do own a professional account with them and it really made my life much easier, as I can expose whatever I need to the world without having to tinker with my router. This helps me expose services from my Raspberry Pis and from my servers.

Using .NET Core Static files middleware, I was able to build a quick solution (accessible via any browser and mobile responsive) with just 300 lines of code.

The main features of the application are:
  • Multiple file uploader (up to 300 MB).
  • Multiple file downloader.
  • Daily browsable functionality (it allows you to navigate on each day to see the list of files uploaded).
  • Thumbnail automatic generation using MagicScaler.
  • Automatic movie conversion via CloudConvert (this allows me to share videos between mobile devices as iPhones generate .mov files which cannot be played on Android devices).
  • Keep the existing quality of the file (full size). This means uploading huge files to the file server.
  • Cookie authentication with Policies.

You can see the flow in the image below:


Sample code for the file uploader can be found below:
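
A minimal sketch of what such an upload endpoint could look like in ASP.NET Core 2.1 (controller name, size limit and the per-day folder layout are assumptions; the real project also generates thumbnails and converts videos):

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class UploadController : Controller
{
    private readonly IHostingEnvironment _environment;

    public UploadController(IHostingEnvironment environment)
    {
        _environment = environment;
    }

    [HttpPost]
    [RequestSizeLimit(300_000_000)] // allow uploads up to ~300 MB
    public async Task<IActionResult> Upload(List<IFormFile> files)
    {
        // Store files under wwwroot/<yyyy-MM-dd> so they are browsable by day
        var folder = Path.Combine(_environment.WebRootPath, DateTime.Now.ToString("yyyy-MM-dd"));
        Directory.CreateDirectory(folder);

        foreach (var file in files)
        {
            if (file.Length <= 0)
                continue;
            var path = Path.Combine(folder, Path.GetFileName(file.FileName));
            using (var stream = new FileStream(path, FileMode.Create))
            {
                await file.CopyToAsync(stream);
            }
        }

        return Ok(new { count = files.Count });
    }
}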


To list the files, I use the File provider in ASP.NET Core which uses the static file middleware to locate static files, in my case pictures and videos.
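
And a sketch of how that file provider can be used to list what was uploaded on a given day (names are illustrative):

using System.Linq;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.FileProviders;

public class FileListingService
{
    private readonly IFileProvider _fileProvider;

    public FileListingService(IHostingEnvironment environment)
    {
        // Points at the same wwwroot folder that the static file middleware serves
        _fileProvider = new PhysicalFileProvider(environment.WebRootPath);
    }

    public string[] GetFilesForDay(string day)
    {
        // e.g. day = "2018-08-12" returns the names of that day's pictures and videos
        return _fileProvider
            .GetDirectoryContents(day)
            .Where(f => !f.IsDirectory)
            .Select(f => f.Name)
            .ToArray();
    }
}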


There are quite a lot of things left to do to improve the application, but now anyone who uses it can easily upload/download pictures from one of my servers, which is monitored and constantly backed up so no picture gets lost. Also, segregating pictures by date helps a lot when looking for a specific one.

I'm really impressed with the latest release of .NET Core (note that this has been built with .NET Core 2.1 and there is already a 2.2 preview available): the requests are really fast and you can't even notice any lag, even when browsing from your phone on 3G, which makes for a nice user experience.

This is how it looks when browsing from my phone:





The source code will be available soon on my GitHub page; I'm still trying to figure out an issue with https redirection, which does not work correctly yet, and without it the project doesn't make much sense to publish.


Jordi Corbilla

Enable .NET Core 3.0 Preview 7 on Visual Studio 2019

I've started migrating all my ASP.NET Core 2.1 apps to the latest ASP.NET Core 3.0, but as it's still marked as a preview as of July 2019, you will have to tell Visual Studio to allow these kinds of packages. So for now, to create ASP.NET Core 3.0 projects with VS2019 (version 16.2.1), you should do the following:



  • Enable preview releases of .NET Core SDK
    • Click Tools | Options in the top menu
    • Expand Environment | .NET Core 
    • Ensure that "Use previews of the .NET Core SDK (requires restart)" checkbox is checked

Once you restart Visual Studio and install all the necessary components mentioned above, you should see the following option under Target framework in your project:


Now, building your project and making it work is another thing, but at least the framework is there for you to play with!

Happy Coding!
Jordi

Date and Time in Console log in ASP.NET Core

One of the challenges I have found so far when moving to ASP.NET Core is that the built-in default console logger is not as verbose as I was expecting when it comes to displaying dates and times. To me this is the most obvious thing to show, as I want to see when something happened:


As you can see in the image above, you can appreciate all the actions that are happening in the different controllers, but there is no timestamp. This was raised as an issue with the aspnet team, but it was deferred as it was not considered a critical piece of functionality... I guess it depends on who decides what is critical. In any case, I'm the bearer of good news with a solution to your problem.

Serilog to the rescue!

With Serilog you can easily create a template so we can display the message in any way we see fit. To accomplish this, here are the different components that you need (I'm already using the latest .NET Core 3.0 Preview 7):

Install the following packages:
  1. - Serilog.AspNetCore (latest, at the time of this example, was 2.1.2-dev-00028)
  2. - Serilog.Sinks.Console (latest, at the time of this example, was 3.1.2-dev-00779)
Add the following code to your Program.cs file:
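
A sketch of the kind of configuration I mean, using a console sink output template that includes the timestamp (adjust the template to taste; the Startup class is your existing one):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        // Console sink with a template that prefixes every entry with a timestamp
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console(outputTemplate:
                "[{Timestamp:yyyy-MM-dd HH:mm:ss.fff}] [{Level:u3}] {Message:lj}{NewLine}{Exception}")
            .CreateLogger();

        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseSerilog() // replace the built-in console logger with Serilog
            .UseStartup<Startup>();
}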


And now you will see the log that you need:


Hope it helps!
Jordi

Deploy a Containerized Asp.net Core 3.1 Web Api to Kubernetes

It's been a while since I last wrote, and it's been a very interesting journey so far. Busy as usual, it's been hard to find the time to write something meaningful here. This time I'm bringing you something I've been playing with for a while and that I hope can bring your development to another level. In this article, I will show you how easy it is to containerize your .NET Core application into a Docker container and then deploy it to a Kubernetes cluster. I'm also preparing my Rpi cluster to become my main Kubernetes cluster, and I will write about it once everything is up and running.

Requirements
First, you'll need the following tools and requirements (Note that I'm using Windows 10 as my main OS, so everything will be configured around it):

1) Docker Desktop for Windows:
At the time of this article, I'm using Docker 2.2.0.5 which should be the latest one. You can get the latest from here: (https://docs.docker.com/docker-for-windows/install/) and follow the steps from the website.

2) Enable Kubernetes
Once everything is installed, open Docker preferences and select Kubernetes. Enable all the options there and restart Docker. That should bring back Docker and Kubernetes and you should see them online as in the picture below:


3) Enable File Sharing
You need to make sure you enable File Sharing so the container can see your local drive.


4) Get VS Code and Install Docker plugin (v1.1.0)
I won't go into much detail about the .NET Core installation, as I'm guessing you should all be able to build and run ASP.NET Core applications locally (https://code.visualstudio.com/Download). In this step, you'll need to download VS Code if you haven't already, plus the Docker plugin, which will allow us to create Docker files easily as it already provides a pre-built template.

5) Get VS Code Kubernetes plugin (v1.2.0)
Install the plugin to generate the YAML files that will be needed to configure the cluster. This requires the YAML component and at the time of this article, I was using version 0.8.0.


6) API Testing
In order to test the API, I recommend using Swagger (https://swagger.io/tools/open-source/getting-started/) as it will easily expose the API functions and arguments that you can test. Alternatively, you can use Postman (https://www.postman.com/downloads/) which is also a great tool for this.


Dockerizing your Web API
Now we are ready to go to our Web API project and create a docker file. Note that I'm using a complex project for this exercise and not just a simple hello world. This API has additional complex requests and it also requires a Database. It also runs a few services and exposes most of the functionality via Swagger UI. 

Go to the folder of your project and type "code ." to launch vscode on the project. Then go to view -> command palette and type "docker add" as in the picture below and select "Docker Add Files to Workspace":

Once you add the template, it will ask you for the following details:
- Application platform: ASP.NET Core.
- Operating System (container): Linux
- Ports to open: 80 and 443

The example used is a bit more complex and it includes several dependencies that need to be copied across during the creation of the container. Below you can find a dependency diagram which shows the project structure:
The application consists of a WebAPI that allows you to submit trading orders and it includes a service layer that performs the triggers internally. It also connects to a SQL Server DB which sits outside the container on my local machine. So, the connection string in the project needs to point to my local machine (192.168.1.129) plus the SQL Server port (1433). Sample connection string: ("DefaultConnection": "user id=user; password=password;Initial Catalog=TradingPlatform;Data Source=192.168.1.129,1433;").

The final docker file should look as follows:
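
Something along the lines of the standard multi-stage template generated by the Docker extension (the project and solution names are assumptions):

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
# Restore first so this layer stays cached while the source code changes
COPY ["TradingPlatform.WebApi/TradingPlatform.WebApi.csproj", "TradingPlatform.WebApi/"]
RUN dotnet restore "TradingPlatform.WebApi/TradingPlatform.WebApi.csproj"
COPY . .
WORKDIR "/src/TradingPlatform.WebApi"
RUN dotnet build "TradingPlatform.WebApi.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "TradingPlatform.WebApi.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TradingPlatform.WebApi.dll"]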

The file specifies that it will create an ASP.NET Core 3.1 base layer, switch to the /app working directory and expose ports 80 and 443. The second image includes the SDK; it copies all our source code, runs the commands to restore the dependencies and finally runs the dotnet build command on our solution.

Now that we have our Dockerfile, we can build the image: open a new terminal via VS Code and type "docker build -t trading-platform:v1 .". The output should look like this:

If everything works as expected, the API should be packaged in the Docker image and we should be able to see it listed (using the command docker images):

Now we just need to run our container (docker run -it --rm -p 8080:80 trading-platform:v1) and test that everything is working correctly. Note that the exposed port is 8080.

As you can see in the image below, the API is up and running and I can explore it via Swagger UI on localhost:8080/swagger/api/index.html, which is the port we have exposed through our container:


Deploying it to Kubernetes

Now we have our Docker container with an ASP.NET Core Web API that talks to a SQL Server DB, and it will now be deployed to a Kubernetes cluster. Your Kubernetes cluster should already be up and running if you have followed the steps above during the Docker installation.

Check that your Kubernetes context is switched to docker-desktop:
You can check that the configuration is correct by using "kubectl config get-contexts":

This will allow us to select the context we want to work with. Note that I have additional clusters created on my local machine.

Now we need to create the deployment file (deployment.yml). Generate a new deployment.yml file in your folder and then via VS Code, type "deployment" and that will bring up the inline annotation from the Kubernetes plugin. Then fill in the gaps with the information you need to set up the cluster and pods as shown in the information below:

We will provide a deployment name "trading-platform-deployment", the name of the pod "trading-platform-pod" and the name of the docker container to use, which in our case is called "trading-platform:v1". We will then specify port 80 as the port of the container.
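
Using those names, a deployment.yml along these lines should do the trick (apply it with kubectl apply -f deployment.yml; imagePullPolicy is set so Kubernetes uses the locally built image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: trading-platform-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trading-platform-pod
  template:
    metadata:
      labels:
        app: trading-platform-pod
    spec:
      containers:
        - name: trading-platform-pod
          image: trading-platform:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
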
If everything goes well, you should see your deployment in Kubernetes ready and also the pods. We can also see that the app is running by inspecting the logs:

In order to make it publicly available, we need to provide a service layer that will give us the IP to reach the container. Now we need to generate a service.yml file (press ctrl+space to trigger the Kubernetes plugin and select the deployment service option to scaffold the service template):
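
A sketch of that service.yml, exposing port 8080 and forwarding it to port 80 in the container (applied with kubectl apply -f service.yml):

apiVersion: v1
kind: Service
metadata:
  name: trading-platform-service
spec:
  type: LoadBalancer
  selector:
    app: trading-platform-pod
  ports:
    - port: 8080
      targetPort: 80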


The service is linked to the pod, and we specify the port we want to expose (8080), the port in the container (80) and also the type of service, which is LoadBalancer in our case. We can now check that everything is running correctly using commands such as kubectl get services and kubectl get pods:

And presto! Now we have our API running in a Docker container and deployed to a Kubernetes cluster, and to make it more realistic, the project is a complex one with services and additional libraries, as well as a connection to a SQL Server DB.

If we browse to (http://localhost:8080/swagger/api/index.html) we will be able to reach the API.

Once completed, if you want to stop the service and pod, type:
- kubectl delete service trading-platform-service
- kubectl delete pod trading-platform-deployment-6bf776f966-8xs7r

Predicting stock prices using a TensorFlow LSTM (long short-term memory) neural network for times series forecasting


1) Introduction

Predicting stock prices is a cumbersome task as it does not follow any specific pattern. Changes in the stock prices are purely based on supply and demand during a period of time. In order to learn the specific characteristics of a stock price, we can use deep learning to identify these patterns through machine learning. One of the most well-known networks for series forecasting is LSTM (long short-term memory) which is a Recurrent Neural Network (RNN) that is able to remember information over a long period of time, thus making them extremely useful for predicting stock prices. RNNs are well-suited to time series data and they are able to process the data step-by-step, maintaining an internal state where they cache the information they have seen so far in a summarised version. The successful prediction of a stock's future price could yield a significant profit.

2) Stock Market Data

The initial data we will use for this model is taken directly from the Yahoo Finance page, which contains the latest market data on a specific stock price. To perform this operation easily using Python, we will use the yFinance library, which has been built specifically for this and will allow us to download all the information we need for a given ticker symbol.

Below is a sample screenshot of the ticker symbol (GOOG) that we will use in this stock prediction article:

2.1) Market Info Download

To download the data, we will need the yFinance library installed; then we only need to perform the following operation to download all the relevant information for a given stock using its ticker symbol.
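
The core of that script boils down to something like this (a sketch; the actual download_market_data_info.py in the repository prints more sections than shown here):

import json

import yfinance as yf

# Create a Ticker object for Alphabet and print the different data sections
ticker = yf.Ticker("GOOG")

print("Info")
print(json.dumps(ticker.info, indent=0, sort_keys=True, default=str))

print("ISIN")
print(ticker.isin)

print("Major Holders")
print(ticker.major_holders)

print("Institutional Holders")
print(ticker.institutional_holders)

print("Dividends")
print(ticker.dividends)

print("Splits")
print(ticker.splits)

print("Actions")
print(ticker.actions)

print("Recommendations")
print(ticker.recommendations)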

Below is the output from the [download_market_data_info.py] file that is able to download financial data from Yahoo Finance.

C:\Users\thund\Source\Repos\stock-prediction-deep-neural-learning>python download_market_data_info.py
Info
{
"52WeekChange": 0.26037383,
"SandP52WeekChange": 0.034871936,
"address1": "1600 Amphitheatre Parkway",
"algorithm": null,
"annualHoldingsTurnover": null,
"annualReportExpenseRatio": null,
"ask": 1432.77,
"askSize": 1400,
"averageDailyVolume10Day": 2011171,
"averageVolume": 1857809,
"averageVolume10days": 2011171,
"beta": 1.068946,
"beta3Year": null,
"bid": 1432.16,
"bidSize": 3000,
"bookValue": 297.759,
"category": null,
"circulatingSupply": null,
"city": "Mountain View",
"companyOfficers": [],
"country": "United States",
"currency": "USD",
"dateShortInterest": 1592179200,
"dayHigh": 1441.19,
"dayLow": 1409.82,
"dividendRate": null,
"dividendYield": null,
"earningsQuarterlyGrowth": 0.027,
"enterpriseToEbitda": 17.899,
"enterpriseToRevenue": 5.187,
"enterpriseValue": 864533741568,
"exDividendDate": null,
"exchange": "NMS",
"exchangeTimezoneName": "America/New_York",
"exchangeTimezoneShortName": "EDT",
"expireDate": null,
"fiftyDayAverage": 1417.009,
"fiftyTwoWeekHigh": 1532.106,
"fiftyTwoWeekLow": 1013.536,
"fiveYearAverageReturn": null,
"fiveYearAvgDividendYield": null,
"floatShares": 613293304,
"forwardEps": 55.05,
"forwardPE": 26.028149,
"fromCurrency": null,
"fullTimeEmployees": 123048,
"fundFamily": null,
"fundInceptionDate": null,
"gmtOffSetMilliseconds": "-14400000",
"heldPercentInsiders": 0.05746,
"heldPercentInstitutions": 0.7062,
"industry": "Internet Content & Information",
"isEsgPopulated": false,
"lastCapGain": null,
"lastDividendValue": null,
"lastFiscalYearEnd": 1577750400,
"lastMarket": null,
"lastSplitDate": 1430092800,
"lastSplitFactor": "10000000:10000000",
"legalType": null,
"logo_url": "https://logo.clearbit.com/abc.xyz",
"longBusinessSummary": "Alphabet Inc. provides online advertising services in the United States, Europe, the Middle East, Africa, the Asia-Pacific, Canada, and Latin America. It offers performance and brand advertising services. The company operates through Google and Other Bets segments. The Google segment offers products, such as Ads, Android, Chrome, Google Cloud, Google Maps, Google Play, Hardware, Search, and YouTube, as well as technical infrastructure. It also offers digital content, cloud services, hardware devices, and other miscellaneous products and services. The Other Bets segment includes businesses, including Access, Calico, CapitalG, GV, Verily, Waymo, and X, as well as Internet and television services. Alphabet Inc. was founded in 1998 and is headquartered in Mountain View, California.",
"longName": "Alphabet Inc.",
"market": "us_market",
"marketCap": 979650805760,
"maxAge": 1,
"maxSupply": null,
"messageBoardId": "finmb_29096",
"morningStarOverallRating": null,
"morningStarRiskRating": null,
"mostRecentQuarter": 1585612800,
"navPrice": null,
"netIncomeToCommon": 34522001408,
"nextFiscalYearEnd": 1640908800,
"open": 1411.1,
"openInterest": null,
"payoutRatio": 0,
"pegRatio": 4.38,
"phone": "650-253-0000",
"previousClose": 1413.61,
"priceHint": 2,
"priceToBook": 4.812112,
"priceToSalesTrailing12Months": 5.87754,
"profitMargins": 0.20712,
"quoteType": "EQUITY",
"regularMarketDayHigh": 1441.19,
"regularMarketDayLow": 1409.82,
"regularMarketOpen": 1411.1,
"regularMarketPreviousClose": 1413.61,
"regularMarketPrice": 1411.1,
"regularMarketVolume": 1084440,
"revenueQuarterlyGrowth": null,
"sector": "Communication Services",
"sharesOutstanding": 336161984,
"sharesPercentSharesOut": 0.0049,
"sharesShort": 3371476,
"sharesShortPreviousMonthDate": 1589500800,
"sharesShortPriorMonth": 3462105,
"shortName": "Alphabet Inc.",
"shortPercentOfFloat": null,
"shortRatio": 1.9,
"startDate": null,
"state": "CA",
"strikePrice": null,
"symbol": "GOOG",
"threeYearAverageReturn": null,
"toCurrency": null,
"totalAssets": null,
"tradeable": false,
"trailingAnnualDividendRate": null,
"trailingAnnualDividendYield": null,
"trailingEps": 49.572,
"trailingPE": 28.904415,
"twoHundredDayAverage": 1352.9939,
"volume": 1084440,
"volume24Hr": null,
"volumeAllCurrencies": null,
"website": "http://www.abc.xyz",
"yield": null,
"ytdReturn": null,
"zip": "94043"
}

ISIN
-

Major Holders
01
0 5.75% % of Shares Held by All Insider
1 70.62% % of Shares Held by Institutions
2 74.93% % of Float Held by Institutions
33304 Number of Institutions Holding Shares

Institutional Holders
Holder Shares Date Reported % Out Value
0 Vanguard Group, Inc. (The) 23162950 2020-03-30 0.0687 26934109889
1 Blackrock Inc. 20264225 2020-03-30 0.0601 23563443472
2 Price (T.Rowe) Associates Inc 12520058 2020-03-30 0.0371 14558448642
3 State Street Corporation 11814026 2020-03-30 0.0350 13737467573
4 FMR, LLC 8331868 2020-03-30 0.0247 9688379429
5 Capital International Investors 4555880 2020-03-30 0.0135 5297622822
6 Geode Capital Management, LLC 4403934 2020-03-30 0.0131 5120938494
7 Northern Trust Corporation 4017009 2020-03-30 0.0119 4671018235
8 JP Morgan Chase & Company 3707376 2020-03-30 0.0110 4310973886
9 AllianceBernstein, L.P. 3483382 2020-03-30 0.0103 4050511423

Dividents
Series([], Name: Dividends, dtype: int64)

Splits
Date
2014-03-27 2.002
2015-04-27 1.000
Name: Stock Splits, dtype: float64

Actions
Dividends Stock Splits
Date
2014-03-27 0.0 2.002
2015-04-27 0.0 1.000

Calendar
Empty DataFrame
Columns: []
Index: [Earnings Date, Earnings Average, Earnings Low, Earnings High, Revenue Average, Revenue Low, Revenue High]

Recommendations
Firm To Grade From Grade Action
Date
2012-03-14 15:28:00 Oxen Group Hold init
2012-03-28 06:29:00 Citigroup Buy main
2012-04-03 08:45:00 Global Equities Research Overweight main
2012-04-05 06:34:00 Deutsche Bank Buy main
2012-04-09 06:03:00 Pivotal Research Buy main
2012-04-10 11:32:00 UBS Buy main
2012-04-13 06:16:00 Deutsche Bank Buy main
2012-04-13 06:18:00 Jefferies Buy main
2012-04-13 06:37:00 PiperJaffray Overweight main
2012-04-13 06:38:00 Goldman Sachs Neutral main
2012-04-13 06:41:00 JP Morgan Overweight main
2012-04-13 06:51:00 Oppenheimer Outperform main
2012-04-13 07:13:00 Benchmark Hold main
2012-04-13 08:46:00 BMO Capital Outperform main
2012-04-16 06:52:00 Hilliard Lyons Buy main
2012-06-06 06:17:00 Deutsche Bank Buy main
2012-06-06 06:56:00 JP Morgan Overweight main
2012-06-22 06:15:00 Citigroup Buy main
2012-07-13 05:57:00 Wedbush Neutral init
2012-07-17 09:33:00 Outperform main
2012-07-20 06:43:00 Benchmark Hold main
2012-07-20 06:54:00 Deutsche Bank Buy main
2012-07-20 06:59:00 Bank of America Buy main
2012-08-13 05:49:00 Morgan Stanley Overweight Equal-Weight up
2012-09-17 06:07:00 Global Equities Research Overweight main
2012-09-21 06:28:00 Cantor Fitzgerald Buy init
2012-09-24 06:11:00 Citigroup Buy main
2012-09-24 09:05:00 Pivotal Research Buy main
2012-09-25 07:20:00 Capstone Buy main
2012-09-26 05:48:00 Canaccord Genuity Buy main
... ... ... ... ...
2017-10-27 19:29:31 UBS Buy main
2018-02-02 14:04:52 PiperJaffray Overweight Overweight main
2018-04-24 11:43:49 JP Morgan Overweight Overweight main
2018-04-24 12:24:37 Deutsche Bank Buy Buy main
2018-05-05 14:00:37 B. Riley FBR Buy main
2018-07-13 13:49:13 Cowen & Co. Outperform Outperform main
2018-07-24 11:50:55 Cowen & Co. Outperform Outperform main
2018-07-24 13:33:47 Raymond James Outperform Outperform main
2018-10-23 11:18:00 Deutsche Bank Buy Buy main
2018-10-26 15:17:08 Raymond James Outperform Outperform main
2019-01-23 12:55:04 Deutsche Bank Buy Buy main
2019-02-05 12:55:12 Deutsche Bank Buy Buy main
2019-02-05 13:18:47 PiperJaffray Overweight Overweight main
2019-05-15 12:34:54 Deutsche Bank Buy main
2019-10-23 12:58:59 Credit Suisse Outperform main
2019-10-29 11:58:09 Raymond James Outperform main
2019-10-29 14:15:40 Deutsche Bank Buy main
2019-10-29 15:48:29 UBS Buy main
2020-01-06 11:22:07 Pivotal Research Buy Hold up
2020-01-17 13:01:48 UBS Buy main
2020-02-04 12:26:56 Piper Sandler Overweight main
2020-02-04 12:41:00 Raymond James Outperform main
2020-02-04 14:00:36 Deutsche Bank Buy main
2020-02-06 11:34:20 CFRA Strong Buy main
2020-03-18 13:52:51 JP Morgan Overweight main
2020-03-30 13:26:16 UBS Buy main
2020-04-17 13:01:41 Oppenheimer Outperform main
2020-04-20 19:29:50 Credit Suisse Outperform main
2020-04-29 14:01:51 UBS Buy main
2020-05-05 12:44:16 Deutsche Bank Buy main

[219 rows x 4 columns]

Earnings
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Quarterly Earnings
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Financials
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Quarterly Financials
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Balance Sheet
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Quarterly Balance Sheet
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Balancesheet
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Quarterly Balancesheet
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Cashflow
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Quarterly Cashflow
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []

Sustainability
None

Options
('2020-07-02', '2020-07-10', '2020-07-17', '2020-07-24', '2020-07-31', '2020-08-07', '2020-08-21', '2020-09-18', '2020-11-20', '2020-12-01', '2020-12-18', '2021-01-15', '2021-06-18', '2022-01-21', '2022-06-17')

The output includes a JSON document that we could use later on to create our Security Master, if we ever wanted to store this data somewhere to keep track of the securities we are going to trade with. As the data could come with different fields, my suggestion is to store it in a Data Lake so we can build it from multiple sources without having to worry too much about the way the data is structured.

2.2) Market Data Download

The previous step helps us identify several characteristics of a given ticker symbol, so we can use its properties to define some of the charts I'm showing below. Note that the yFinance library only requires the ticker symbol of the stock to download, plus the start date and end date of the period we want. Additionally, we can also specify the granularity of the data using the interval parameter. By default, the interval is 1 day, and this is the one I will use for my training.

To download the data we can use the following command:

import datetime
import pandas as pd
import yfinance as yf

start = pd.to_datetime('2004-08-01')
stock = ['GOOG']
data = yf.download(stock, start=start, end=datetime.date.today())
print(data)

And the sample output:

C:\Users\thund\Source\Repos\stock-prediction-deep-neural-learning>python download_market_data.py
[*********************100%***********************] 1 of 1 completed
Open High Low Close Adj Close Volume
Date
2004-08-19 49.813286 51.835709 47.800831 49.982655 49.982655 44871300
2004-08-20 50.316402 54.336334 50.062355 53.952770 53.952770 22942800
2004-08-23 55.168217 56.528118 54.321388 54.495735 54.495735 18342800
2004-08-24 55.412300 55.591629 51.591621 52.239193 52.239193 15319700
2004-08-25 52.284027 53.798351 51.746044 52.802086 52.802086 9232100
2004-08-26 52.279045 53.773445 52.134586 53.753517 53.753517 7128600
2004-08-27 53.848164 54.107193 52.647663 52.876804 52.876804 6241200
2004-08-30 52.443428 52.548038 50.814533 50.814533 50.814533 5221400
2004-08-31 50.958992 51.661362 50.889256 50.993862 50.993862 4941200
2004-09-01 51.158245 51.292744 49.648903 49.937820 49.937820 9181600
2004-09-02 49.409801 50.993862 49.285267 50.565468 50.565468 15190400
2004-09-03 50.286514 50.680038 49.474556 49.818268 49.818268 5176800
2004-09-07 50.316402 50.809555 49.619015 50.600338 50.600338 5875200
2004-09-08 50.181908 51.322632 50.062355 50.958992 50.958992 5009200
2004-09-09 51.073563 51.163227 50.311420 50.963974 50.963974 4080900
2004-09-10 50.610302 53.081039 50.460861 52.468334 52.468334 8740200
2004-09-13 53.115910 54.002586 53.031227 53.549286 53.549286 7881300
2004-09-14 53.524376 55.790882 53.195610 55.536835 55.536835 10880300
2004-09-15 55.073570 56.901718 54.894241 55.790882 55.790882 10763900
2004-09-16 55.960247 57.683788 55.616535 56.772205 56.772205 9310200
2004-09-17 56.996365 58.525631 56.562988 58.525631 58.525631 9517400
2004-09-20 58.256641 60.572956 58.166977 59.457142 59.457142 10679200
2004-09-21 59.681301 59.985161 58.535595 58.699978 58.699978 7263000
2004-09-22 58.480801 59.611561 58.186901 58.968971 58.968971 7617100
2004-09-23 59.198112 61.086033 58.291508 60.184414 60.184414 8576100
2004-09-24 60.244190 61.818291 59.656395 59.691261 59.691261 9166700
2004-09-27 59.556767 60.214302 58.680054 58.909195 58.909195 7099600
2004-09-28 60.423519 63.462128 59.880554 63.193138 63.193138 17009400
2004-09-29 63.113434 67.257904 62.879314 65.295258 65.295258 30661400
2004-09-30 64.707458 65.902977 64.259140 64.558022 64.558022 13823300
... ... ... ... ... ... ...
2020-05-19 1386.996948 1392.000000 1373.484985 1373.484985 1373.484985 1280600
2020-05-20 1389.579956 1410.420044 1387.250000 1406.719971 1406.719971 1655400
2020-05-21 1408.000000 1415.489990 1393.449951 1402.800049 1402.800049 1385000
2020-05-22 1396.709961 1412.760010 1391.829956 1410.420044 1410.420044 1309400
2020-05-26 1437.270020 1441.000000 1412.130005 1417.020020 1417.020020 2060600
2020-05-27 1417.250000 1421.739990 1391.290039 1417.839966 1417.839966 1685800
2020-05-28 1396.859985 1440.839966 1396.000000 1416.729980 1416.729980 1692200
2020-05-29 1416.939941 1432.569946 1413.349976 1428.920044 1428.920044 1838100
2020-06-01 1418.390015 1437.959961 1418.000000 1431.819946 1431.819946 1217100
2020-06-02 1430.550049 1439.609985 1418.829956 1439.219971 1439.219971 1278100
2020-06-03 1438.300049 1446.552002 1429.776978 1436.380005 1436.380005 1256200
2020-06-04 1430.400024 1438.959961 1404.729980 1412.180054 1412.180054 1484300
2020-06-05 1413.170044 1445.050049 1406.000000 1438.390015 1438.390015 1734900
2020-06-08 1422.339966 1447.989990 1422.339966 1446.609985 1446.609985 1404200
2020-06-09 1445.359985 1468.000000 1443.209961 1456.160034 1456.160034 1409200
2020-06-10 1459.540039 1474.259033 1456.270020 1465.849976 1465.849976 1525200
2020-06-11 1442.479980 1454.474976 1402.000000 1403.839966 1403.839966 1991300
2020-06-12 1428.489990 1437.000000 1386.020020 1413.180054 1413.180054 1944200
2020-06-15 1390.800049 1424.800049 1387.920044 1419.849976 1419.849976 1503900
2020-06-16 1445.219971 1455.020020 1425.900024 1442.719971 1442.719971 1709200
2020-06-17 1447.160034 1460.000000 1431.380005 1451.119995 1451.119995 1548300
2020-06-18 1449.160034 1451.410034 1427.010010 1435.959961 1435.959961 1581900
2020-06-19 1444.000000 1447.800049 1421.349976 1431.719971 1431.719971 3157900
2020-06-22 1429.000000 1452.750000 1423.209961 1451.859985 1451.859985 1542400
2020-06-23 1455.640015 1475.941040 1445.239990 1464.410034 1464.410034 1429800
2020-06-24 1461.510010 1475.420044 1429.750000 1431.969971 1431.969971 1756000
2020-06-25 1429.900024 1442.900024 1420.000000 1441.329956 1441.329956 1230500
2020-06-26 1431.390015 1433.449951 1351.989990 1359.900024 1359.900024 4267700
2020-06-29 1358.180054 1395.599976 1347.010010 1394.969971 1394.969971 1810200
2020-06-30 1390.439941 1418.650024 1383.959961 1413.609985 1413.609985 2041600

[3994 rows x 6 columns]

Note that it is important to set the start date correctly to ensure we are actually collecting data from the first trading day. If we don't, we might end up with NaN values that could affect the outcome of our training.
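As a quick sanity check, a minimal sketch (assuming the data frame from the download above) to spot and drop any NaN rows before training:

# Count missing values per column and drop incomplete rows if any slipped in
print(data.isna().sum())
data = data.dropna()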

3) Deep Learning Model

3.1) Training and Validation Data

Now that we have the data we want to use, we need to define what constitutes our training and validation data. As stock histories vary depending on the dates, the function I have created requires 3 basic arguments:

  • Ticker Symbol: GOOG
  • Start Date: Date when the stock started trading, in this case 2004-Aug-01.
  • Validation Date: Date from which we want the data to be treated as validation data. In this case, we specify 2017-01-01 as our split point.

Note that you will need to have configured TensorFlow, Keras, and a GPU in order to run the samples below.

In this exercise, I'm only interested in the closing price, which is the standard benchmark for stocks and securities.
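This is not the exact function from the repository, but a minimal sketch of the idea, assuming the three arguments above and keeping only the Close column (the function name is illustrative):

import datetime
import pandas as pd
import yfinance as yf

def download_and_split(ticker, start_date, validation_date):
    # Download the full history and keep only the closing price
    data = yf.download(ticker, start=start_date, end=datetime.date.today())
    close = data[['Close']]
    # Everything before the validation date is training data, the rest is validation data
    training_data = close[close.index < validation_date].copy()
    validation_data = close[close.index >= validation_date].copy()
    return training_data, validation_data

training_data, validation_data = download_and_split(
    'GOOG', pd.to_datetime('2004-08-01'), pd.to_datetime('2017-01-01'))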

Below you can find the chart with the division we will create between Training Data and Validation Data:

Also, the histogram showing the distribution of the prices:

3.2) Data Normalization

In order to normalize the data, we need to scale it between 0 and 1 so everything sits on a common scale. To accomplish this, we can use the preprocessing tool MinMaxScaler as seen below:

from sklearn.preprocessing import MinMaxScaler

min_max = MinMaxScaler(feature_range=(0, 1))
train_scaled = min_max.fit_transform(training_data)

3.3) Adding Timesteps

An LSTM network needs the data fed in as a 3D array. To translate our 2D array into a 3D one, we use a short time step to loop through the data, create smaller partitions, and feed them into the model. The final array is then reshaped into (training samples, number of time steps, 1 feature per step). The code below represents this concept:

import numpy as np

time_steps = 3
x_train, y_train = [], []
for i in range(time_steps, train_scaled.shape[0]):
    x_train.append(train_scaled[i - time_steps:i])
    y_train.append(train_scaled[i, 0])
x_train, y_train = np.array(x_train), np.array(y_train)

We have implemented a time step of 3 days. Using this technique, we allow our network to look back 3 days in our data to predict the subsequent day. The figure below represents how our implementation uses this concept and how the first 3 samples of the Close price generate the 4th sample, and so on. This will generate a matrix of shape (3, 1), 3 being the time steps and 1 the number of features (Close price).

3.4) Creation of the deep learning model LSTM

To create this model, you will need to have TensorFlow (or TensorFlow-GPU) and Keras installed in order for it to run. The code for this model can be seen below, with an explanation for each layer:

import os
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def create_long_short_term_memory_model(x_train):
    model = Sequential()
    # 1st LSTM layer with Dropout regularisation
    # * units = 100 neurons, the dimensionality of the output space
    # * return_sequences = True to stack LSTM layers so the next LSTM layer has a three-dimensional sequence input
    # * input_shape => shape of the training dataset
    model.add(LSTM(units=100, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # 20% of the units will be dropped
    model.add(Dropout(0.2))
    # 2nd LSTM layer
    # * units = 50 neurons, the dimensionality of the output space
    # * return_sequences = True to stack LSTM layers so the next LSTM layer has a three-dimensional sequence input
    model.add(LSTM(units=50, return_sequences=True))
    # 20% of the units will be dropped
    model.add(Dropout(0.2))
    # 3rd LSTM layer
    # * units = 50 neurons, the dimensionality of the output space
    # * return_sequences = True to stack LSTM layers so the next LSTM layer has a three-dimensional sequence input
    model.add(LSTM(units=50, return_sequences=True))
    # 50% of the units will be dropped
    model.add(Dropout(0.5))
    # 4th LSTM layer
    # * units = 50 neurons, the dimensionality of the output space
    model.add(LSTM(units=50))
    # 50% of the units will be dropped
    model.add(Dropout(0.5))
    # Dense layer that produces a single output value
    model.add(Dense(units=1))
    model.summary()
    tf.keras.utils.plot_model(model, to_file=os.path.join(project_folder, 'model_lstm.png'), show_shapes=True,
                              show_layer_names=True)
    return model

The rendered model can be seen in the image below, producing a model with more than 100k trainable parameters. 

Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM) (None, 60, 100) 40800
_________________________________________________________________
dropout_1 (Dropout) (None, 60, 100) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 60, 50) 30200
_________________________________________________________________
dropout_2 (Dropout) (None, 60, 50) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 60, 50) 20200
_________________________________________________________________
dropout_3 (Dropout) (None, 60, 50) 0
_________________________________________________________________
lstm_4 (LSTM) (None, 50) 20200
_________________________________________________________________
dropout_4 (Dropout) (None, 50) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 51
=================================================================
Total params: 111,451
Trainable params: 111,451
Non-trainable params: 0

Once we have defined the model, we need to specify the metrics we want to use to track how well it is behaving and the kind of optimizer we want to use for the training. I have also defined the patience for early stopping and the rule that triggers it.

defined_metrics = [
    tf.keras.metrics.MeanSquaredError(name='MSE')
]

callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, mode='min', verbose=1)

model.compile(optimizer='adam', loss='mean_squared_error', metrics=defined_metrics)
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, y_test),
                    callbacks=[callback])

This model has been slightly fine-tuned to reach the lowest validation loss. In this example, we reach a validation loss of 0.14% with an MSE (Mean Squared Error) of 0.14%, which is relatively good and provides us with an accurate result.

The training result can be seen below:

Train on 3055 samples, validate on 881 samples
Epoch 1/100
2020-07-11 15:15:34.557035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
3112/3112 [==============================] - 19s 6ms/sample - loss: 0.0451 - MSE: 0.0451 - val_loss: 0.0068 - val_MSE: 0.0068
Epoch 2/100
3112/3112 [==============================] - 4s 1ms/sample - loss: 0.0088 - MSE: 0.0088 - val_loss: 0.0045 - val_MSE: 0.0045
Epoch 3/100
3112/3112 [==============================] - 5s 1ms/sample - loss: 0.0062 - MSE: 0.0062 - val_loss: 0.0032 - val_MSE: 0.0032
Epoch 4/100
3112/3112 [==============================] - 5s 1ms/sample - loss: 0.0051 - MSE: 0.0051 - val_loss: 0.0015 - val_MSE: 0.0015
Epoch 5/100
3112/3112 [==============================] - 7s 2ms/sample - loss: 0.0045 - MSE: 0.0045 - val_loss: 0.0013 - val_MSE: 0.0013
Epoch 6/100
3112/3112 [==============================] - 5s 2ms/sample - loss: 0.0045 - MSE: 0.0045 - val_loss: 0.0013 - val_MSE: 0.0013
Epoch 7/100
3112/3112 [==============================] - 5s 2ms/sample - loss: 0.0045 - MSE: 0.0045 - val_loss: 0.0015 - val_MSE: 0.0015
Epoch 8/100
3112/3112 [==============================] - 5s 1ms/sample - loss: 0.0040 - MSE: 0.0040 - val_loss: 0.0015 - val_MSE: 0.0015
Epoch 9/100
3112/3112 [==============================] - 5s 1ms/sample - loss: 0.0039 - MSE: 0.0039 - val_loss: 0.0014 - val_MSE: 0.0014
Epoch 00009: early stopping
saving weights
plotting loss
plotting MSE
display the content of the model
886/1 - 0s - loss: 0.0029 - MSE: 0.0014
loss : 0.0014113364930413916
MSE : 0.0014113366

3.5) Making predictions happen

Now it is time to prepare our testing data and send it through our deep learning model to obtain the predictions.

First, we need to import the test data using the same approach we used for the training data using the time steps:

# Testing Data Transformation
x_test = []
y_test = []
for i in range(time_steps, test_scaled.shape[0]):
    x_test.append(test_scaled[i - time_steps:i])
    y_test.append(test_scaled[i, 0])

x_test, y_test = np.array(x_test), np.array(y_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))

Now we can call the predict method, which generates the stock prediction based on the training done over the training data. As a result, we will produce a CSV file that contains the result of the prediction and also a chart that compares the real values against the estimation.
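A minimal sketch of that step, assuming the min_max scaler, the test arrays, and the trained model defined earlier (the file and column names are illustrative, not the repository's exact ones):

import pandas as pd

# Predict on the scaled test set and map the values back to real prices
test_predictions = model.predict(x_test)
test_predictions = min_max.inverse_transform(test_predictions)
real_prices = min_max.inverse_transform(y_test.reshape(-1, 1))

# Store the comparison so it can be charted later (real vs estimated)
result = pd.DataFrame({'Real': real_prices.ravel(),
                       'Estimated': test_predictions.ravel()})
result.to_csv('GOOG_prediction_vs_real.csv', index=False)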

With the validation loss and validation MSE metrics:

 

4) Usage

This has been built using Python 3.6.8.

Download the source code and install the following packages:

C:\Users\thund\Source\Repos\stock-prediction-deep-neural-learning>pip list
Package Version
-------------------- ---------
absl-py 0.8.0
astor 0.8.0
astroid 2.3.3
backcall 0.1.0
certifi 2020.6.20
chardet 3.0.4
colorama 0.4.1
cycler 0.10.0
decorator 4.4.0
Django 2.2.6
gast 0.2.2
google-pasta 0.1.7
graphviz 0.13.2
grpcio 1.23.0
h5py 2.10.0
idna 2.10
image 1.5.27
imageio 2.6.1
imbalanced-learn 0.5.0
imblearn 0.0
ipython 7.8.0
ipython-genutils 0.2.0
isort 4.3.21
jedi 0.15.1
joblib 0.14.0
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
kiwisolver 1.1.0
lazy-object-proxy 1.4.3
lxml 4.5.1
Markdown 3.1.1
matplotlib 3.1.1
mccabe 0.6.1
multitasking 0.0.9
networkx 2.4
numpy 1.17.2
opencv-python 4.1.1.26
opt-einsum 3.1.0
pandas 0.24.0
pandas-datareader 0.5.0
parso 0.5.1
pickleshare 0.7.5
Pillow 6.2.0
pip 20.1.1
prompt-toolkit 2.0.10
protobuf 3.9.2
pydot 1.4.1
Pygments 2.4.2
pylint 2.4.4
pyparsing 2.4.2
python-dateutil 2.8.0
pytz 2019.2
PyWavelets 1.1.1
PyYAML 5.1.2
requests 2.24.0
requests-file 1.5.1
requests-ftp 0.3.1
scikit-image 0.16.2
scikit-learn 0.21.3
scipy 1.3.1
seaborn 0.9.0
setuptools 41.2.0
six 1.12.0
sqlparse 0.3.0
tensorboard 2.0.0
tensorflow 2.0.0
tensorflow-estimator 2.0.1
tensorflow-gpu 2.0.0
termcolor 1.1.0
traitlets 4.3.3
typed-ast 1.4.0
urllib3 1.25.9
wcwidth 0.1.7
Werkzeug 0.16.0
wheel 0.33.6
wrapt 1.11.2
xlrd 1.2.0
yfinance 0.1.54

Then edit the file "stock_prediction_deep_learning.py" to include the stock you want to use and the relevant dates, and execute:

python stock_prediction_deep_learning.py

All source code can be found here:

https://github.com/JordiCorbilla/stock-prediction-deep-neural-learning

GitHub Actions to publish .net core artifacts


In this article, you'll find how easy it is to generate your .NET Core artifacts (binaries) using GitHub Actions.

Adding your action

GitHub offers a set of predefined actions that you can use to cover the needs of your deployment targets. Maybe you need to generate artifacts for Windows, Mac, or Linux, and you want to generate all of them without having to build things separately. The easiest way to use continuous integration through GitHub Actions is via a workflow configuration file. Go to Actions and click New workflow:

Select the netcore action to build your .net core application:

This will open up the following YAML configuration file that we can edit to configure it according to the needs of our project:

Configuring your workflow file

The file below builds my .NET Core application from a specific folder and then publishes the binaries, which become available in the action's artifacts section. The package has been configured for Windows x64 and .NET Core 3.1.6, and it is self-contained (meaning that the .NET Core libraries become part of the artifact).

name: my Project

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

defaults:
  run:
    working-directory: myASPNETCoreApp

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.302
    - name: Install dependencies
      run: dotnet restore
    - name: Build Project
      run: dotnet build --configuration Release --no-restore
    - name: Test Project
      run: dotnet test --no-restore --verbosity normal
    - name: Publish
      run: dotnet publish --configuration Debug --framework netcoreapp3.1 --runtime win-x64 --self-contained true
    - name: Upload a Build Artifact
      uses: actions/upload-artifact@v2.1.1
      with:
        name: myproject-win-64
        path: /home/runner/work/myproject/myproject/myASPNETCoreApp/bin/Debug/netcoreapp3.1/win-x64/publish/

Once completed, you will see the actions taking place on every commit. Below is a sample of execution for different changes in my project. Note that in the image below, the artifacts (binaries) of my project are attached to the output of the action's workflow.

We can click on the commit and explore the output of the build and drill into the build log for more details:

If we drill into the build log, we will be able to find the location of the machine where the artifacts are generated so we can zip them up. I've highlighted it in red and it should match the path mentioned in my YAML file.

One thing to take into consideration is that this is not entirely free: there is a cost if you go over the free usage limit. The build happens on a Linux machine (as defined in the YAML file), and usage of that machine is metered. The free allowance is very generous and you can cap the amount of money you want to spend; the default spending limit is $0, so there is no risk of getting billed accidentally. More info here.

Source code can be found here:


https://github.com/JordiCorbilla/github-actions


Hosting an ASP.NET Core 5.0 Web API in a Windows Service


One of the coolest things I've seen in the .NET ecosystem is that you can host your Web API in a Windows Service. In this short article, you can find the source code for the template that I have built and that you can use to power up your projects.

The template is very simple and it provides a basic skeleton for an ASP.NET Core 5.0 Web API project with hosted services and with the possibility to host the entire solution in a Windows Service. 

You can find the source code here: https://github.com/JordiCorbilla/WindowsServiceHost

The project contains the basic configuration with the latest .NET release available at the time of writing this post, and shows how to easily get it running and start working with it.

The idea behind the project is to be able to host the endpoint without requiring IIS, as you might not have it installed on your server and you might want something a bit more powerful and scalable.

Let me know what you think and if you have further ideas to improve the template. This one is very basic and it uses ASP.NET Core 5.0 Web API, Worker services, HTTP/HTTPS, and the swagger UI with the following packages installed:

Packages Used

<PackageReferenceInclude="Serilog"Version="2.10.0" />
<PackageReferenceInclude="Swashbuckle.AspNetCore"Version="5.6.3" />
<PackageReferenceInclude="Swashbuckle.AspNetCore.Annotations"Version="5.6.3" />
<PackageReferenceInclude="Microsoft.Extensions.Hosting.WindowsServices"Version="5.0.0" />
<PackageReferenceInclude="Serilog.AspNetCore"Version="3.4.0" />
<PackageReferenceInclude="Serilog.Sinks.Console"Version="3.1.1" />
<PackageReferenceInclude="Swashbuckle.AspNetCore.Swagger"Version="5.6.3" />
<PackageReferenceInclude="Microsoft.AspNetCore.Mvc.NewtonsoftJson"Version="5.0.0" />

The important one to mention here is "Microsoft.Extensions.Hosting.WindowsServices", which is the one that allows us to call "UseWindowsService()" on the IHostBuilder interface.

Additional work to read:

Web scraping with Python 3.7, Docker, and Kubernetes


 


These are web scraping scripts to extract financial data. In a nutshell, this method can help you get any information that is available on any website using the BeautifulSoup library and Python. The idea is to use this library to parse any DOM and get the data that we are interested in. Getting data from a list, for example, is a very simple job. Things get more interesting when we want to download more complex data like a table.

1) Installing BeautifulSoup

pip install beautifulsoup4
or
python -m pip install beautifulsoup4

Output:

python -m pip install beautifulsoup4
Collecting beautifulsoup4
Downloading beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
|████████████████████████████████| 115 kB 3.3 MB/s
Collecting soupsieve>1.2; python_version >= "3.0"
Downloading soupsieve-2.2-py3-none-any.whl (33 kB)
Installing collected packages: soupsieve, beautifulsoup4
Successfully installed beautifulsoup4-4.9.3 soupsieve-2.2

2) Simple Example (scraping names)

The following example script tries to generate random user names via web scraping. Firstly, we locate a website that contains a list of names that we can download, and then we use this list to generate user names. To do this, we can browse for any of the top 1000 girl names and pick any of the links available:

In our case, we found this url which seems pretty good for what we need. Upon inspecting the names, we can see that the DOM is pretty straightforward and each name is placed under 'li' tags and the whole group under an 'ol' tag:

To get the list of names in a usable format using Python, we can use the BeautifulSoup library and locate the specific tags we want (the 'ol' and then the 'li' items, and print their content). The code below shows how this can be done:

import requests
from bs4 import BeautifulSoup

# Let's create a random user name generator using web scraping.
# First we need to download a list of names to work on.
# The following url contains 1000 girl names and we will download them using web scraping.
url = 'https://family.disney.com/articles/1000-most-popular-girl-names/'
page = requests.get(url)
content = page.content
soup = BeautifulSoup(content, 'html.parser')

# Find the first 'ol' that contains the list of items
name_list = soup.find('ol')
# Each name in the list is stored under a 'li' item
list_names = name_list.find_all('li')
# Now print all the names that we have scraped from the web site
for name in list_names:
    print(name.get_text())

With a little tweak, we can easily generate usernames based on this data, or do whatever else we want with it.

import os
import random
import string

# Generate the following character set:
# abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!()@$^&*[]
chars = string.ascii_letters + string.digits + '!()@$^&*[]'
random.seed(os.urandom(1024))

for name in list_names:
    extra_digits = random.choice(string.digits)
    extra_chars = ''.join(random.choice(chars) for i in range(8))
    username = name.get_text().lower() + extra_digits + extra_chars
    print(username)

3) Complex Example (scraping financial information)

One of the most interesting uses for this technology is the ability to download large amounts of data that are table-based. This example tries to download the balance sheet from one of the stocks in Yahoo Finance. Imagine that we want to download the balance sheet of TSLA (if you want to download the data, you need to become a premium subscriber and they have made it difficult to perform web scraping). To perform this operation, we need to look at the way the table is created (a bunch of div tags) and how each row is composed (classes, ids) so they are easily identifiable.

Balance Sheet: https://finance.yahoo.com/quote/TSLA/balance-sheet?p=TSLA&_guc_consent_skip=1596652371

import requests
from bs4 import BeautifulSoup

# Download Balance Sheet table from TSLA
url = 'https://finance.yahoo.com/quote/TSLA/balance-sheet?p=TSLA'
page = requests.get(url)
content = page.content
soup = BeautifulSoup(content, 'html.parser')

cash_balance = {}

# Search for the main DIV that encloses the balance sheet table
main_content = soup.find_all('div', class_='M(0) Whs(n) BdEnd Bdc($seperatorColor) D(itb)')
for div in main_content:
    # Look for each DIV that encloses every single row
    sub_div = div.find_all('div', class_='D(tbr) fi-row Bgc($hoverBgColor):h')
    for sub in sub_div:
        # Select the first column as the index of our dictionary and select the second column as the data to store (2019)
        cash_balance[sub.get_text(separator="|").split("|")[0]] = sub.get_text(separator="|").split("|")[1]
        # print(sub.get_text())

The final result of the execution of the code above lets us produce the desired output, scraping the data from the Yahoo Finance page for the TSLA ticker:

Containerizing the script with Docker and Kubernetes

In order to make the script easily deployable, we'll create a Flask service that will host the retrieval of the cash balances and it will be all contained in a docker image.

1) Create the Flask service

import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

server = Flask(__name__)


@server.route("/")
def cash_balance_get():
    # Download Balance Sheet table from TSLA
    url = 'https://finance.yahoo.com/quote/TSLA/balance-sheet?p=TSLA'
    page = requests.get(url)
    content = page.content
    soup = BeautifulSoup(content, 'html.parser')

    cash_balance = {}

    # Search for the main DIV that encloses the balance sheet table
    main_content = soup.find_all('div', class_='M(0) Whs(n) BdEnd Bdc($seperatorColor) D(itb)')
    for div in main_content:
        # Look for each DIV that encloses every single row
        sub_div = div.find_all('div', class_='D(tbr) fi-row Bgc($hoverBgColor):h')
        for sub in sub_div:
            # Select the first column as the index of our dictionary and select the second column as the data to store (2019)
            cash_balance[sub.get_text(separator="|").split("|")[0]] = sub.get_text(separator="|").split("|")[1]
            # print(sub.get_text())

    return jsonify(cash_balance)


if __name__ == "__main__":
    server.run(host='0.0.0.0')

If we run this code and try to get to http://localhost:5000, we'll get the following response:

{"Capital Lease Obligations":"1,540,000","Common Stock Equity":"22,225,000","Invested Capital":"33,964,000","Net Debt":"-","Net Tangible Assets":"21,705,000","Ordinary Shares Number":"960,000","Share Issued":"960,000","Tangible Book Value":"21,705,000","Total Assets":"52,148,000","Total Capitalization":"31,832,000","Total Debt":"13,279,000","Total Equity Gross Minority Interest":"23,075,000","Total Liabilities Net Minority Interest":"29,073,000","Working Capital":"12,469,000"}

The output of the execution can be seen below:

runfile('C:/Users/thund/Source/Repos/web-scraping/web_scraping_yahoo_finance_balance_sheet_server.py', wdir='C:/Users/thund/Source/Repos/web-scraping')
* Serving Flask app "web_scraping_yahoo_finance_balance_sheet_server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [03/Apr/2021 16:00:03] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [03/Apr/2021 16:00:03] "GET /favicon.ico HTTP/1.1" 404 -

2) Preparing the docker file

# set base image (host OS)
FROM python:3.7

# set the working directory in the container
WORKDIR /code

# copy the dependencies file to the working directory
COPY web_scraping_packages.txt .

# install dependencies
RUN pip install -r web_scraping_packages.txt

# copy the script to the working directory
COPY web_scraping_yahoo_finance_balance_sheet_server.py .

# command to run on container start
CMD [ "python", "./web_scraping_yahoo_finance_balance_sheet_server.py" ]

Include the packages to be installed as part of the docker image creation (web_scraping_packages.txt):

requests==2.22.0
beautifulsoup4==4.7.1
Flask==1.1.1

Create the image:

C:\Source\Repos\web-scraping>docker build -t web-scraping:v1 .
[+] Building 547.8s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 526B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.7 42.6s
=> [1/5] FROM docker.io/library/python:3.7@sha256:8b3d5242ba72ac32785362c4ade75b61ce941bd454f8e1c585270736c991 495.1s
=> => resolve docker.io/library/python:3.7@sha256:8b3d5242ba72ac32785362c4ade75b61ce941bd454f8e1c585270736c9916f 0.0s
=> => sha256:8b3d5242ba72ac32785362c4ade75b61ce941bd454f8e1c585270736c9916fe6 1.86kB / 1.86kB 0.0s
=> => sha256:581b9e21ee31fd842281d787d150f92ff8850be52ab00ba7ad75f3167cb397a4 9.04kB / 9.04kB 0.0s
=> => sha256:48c2faf66abec3dce9f54d6722ff592fce6dd4fb58a0d0b72282936c6598a3b3 10.00MB / 10.00MB 50.9s
=> => sha256:b161a84f5d6d6a1588a3fbea618c464e0599094524a09b689cb3f455fbf33344 2.22kB / 2.22kB 0.0s
=> => sha256:004f1eed87df3f75f5e2a1a649fa7edd7f713d1300532fd0909bb39cd48437d7 50.43MB / 50.43MB 232.4s
=> => sha256:5d6f1e8117dbb1c6a57603cb4f321a861a08105a81bcc6b01b0ec2b78c8523a5 7.83MB / 7.83MB 23.8s
=> => sha256:234b70d0479d7f16d7ee8d04e4ffdacc57d7d14313faf59d332f18b2e9418743 51.84MB / 51.84MB 263.6s
=> => sha256:6fa07a00e2f029c4b2c7f177a2b696f1b3510040cde4f5bb06ddbca98e7fbf76 192.35MB / 192.35MB 479.7s
=> => sha256:04a31b4508b8e95fb3cb25486c4068185054895b12e0611e386a002ee9c0e07c 6.15MB / 6.15MB 268.2s
=> => extracting sha256:004f1eed87df3f75f5e2a1a649fa7edd7f713d1300532fd0909bb39cd48437d7 7.7s
=> => extracting sha256:5d6f1e8117dbb1c6a57603cb4f321a861a08105a81bcc6b01b0ec2b78c8523a5 0.9s
=> => extracting sha256:48c2faf66abec3dce9f54d6722ff592fce6dd4fb58a0d0b72282936c6598a3b3 0.9s
=> => sha256:9039bc4ee433b9b3917b5a7506b20c4335921059058fb626f21c10942f68bf1d 16.33MB / 16.33MB 326.7s
=> => extracting sha256:234b70d0479d7f16d7ee8d04e4ffdacc57d7d14313faf59d332f18b2e9418743 5.3s
=> => sha256:9d4af2d5ba9c3e8e735b90ee7041e6bb0d8a21d8e816a0ed781c4feca39d6da1 233B / 233B 276.2s
=> => sha256:dc46d185ff1dbb7fd44a2d045b44e3c8edeec432507a2c7712ff6d31bf802aec 2.17MB / 2.17MB 284.0s
=> => extracting sha256:6fa07a00e2f029c4b2c7f177a2b696f1b3510040cde4f5bb06ddbca98e7fbf76 12.2s
=> => extracting sha256:04a31b4508b8e95fb3cb25486c4068185054895b12e0611e386a002ee9c0e07c 0.5s
=> => extracting sha256:9039bc4ee433b9b3917b5a7506b20c4335921059058fb626f21c10942f68bf1d 1.1s
=> => extracting sha256:9d4af2d5ba9c3e8e735b90ee7041e6bb0d8a21d8e816a0ed781c4feca39d6da1 0.0s
=> => extracting sha256:dc46d185ff1dbb7fd44a2d045b44e3c8edeec432507a2c7712ff6d31bf802aec 0.3s
=> [internal] load build context 0.1s
=> => transferring context: 2.12kB 0.0s
=> [2/5] WORKDIR /code 1.4s
=> [3/5] COPY web_scraping_packages.txt . 0.1s
=> [4/5] RUN pip install -r web_scraping_packages.txt 7.9s
=> [5/5] COPY web_scraping_yahoo_finance_balance_sheet_server.py . 0.1s
=> exporting to image 0.3s
=> => exporting layers 0.3s
=> => writing image sha256:d36f971bc093676c0d91b95308e1a3097aff29de5b55c4cb912e7903cc60b49e 0.0s
=> => naming to docker.io/library/web-scraping:v1 0.0s

image

3) Running the docker image

C:\WINDOWS\system32>docker run -it --rm -p 5000:5000 web-scraping:v1
* Serving Flask app "web_scraping_yahoo_finance_balance_sheet_server" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
172.17.0.1 - - [03/Apr/2021 17:14:13] "GET / HTTP/1.1" 200 -

If you browse http://localhost:5000, you'll get the content of it in json format.
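For completeness, here is a tiny illustrative example of consuming the endpoint from Python rather than the browser (assuming the container is running and listening on port 5000):

import requests

# Fetch the scraped balance sheet from the containerized service
balance_sheet = requests.get('http://localhost:5000/').json()
print(balance_sheet.get('Total Assets'))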

Once you are done, if you don't want to use the image anymore, use the following commands to delete it:

C:\WINDOWS\system32>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
web-scraping v1 791b2aef33a3 19 minutes ago 890MB

C:\WINDOWS\system32>docker rmi -f 791b2aef33a3
Untagged: web-scraping:v1
Deleted: sha256:791b2aef33a3641f7335102f7b8edeab2463b5e4eed046594cd5053c86a9c1f0

Running your Docker image in Kubernetes

** Remember to enable Kubernetes if you are using Docker Desktop on Windows.

To make things a bit interesting, we are going to deploy the docker image into a Kubernetes cluster. To do this, we will need to create the configuration files for the deployment and the service.

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-scraping-deployment
spec:
  selector:
    matchLabels:
      app: web-scraping-pod
  template:
    metadata:
      labels:
        app: web-scraping-pod
    spec:
      containers:
      - name: web-scraping-container
        image: web-scraping:v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5000

Service:

apiVersion: v1
kind: Service
metadata:
  name: web-scraping-service
spec:
  selector:
    app: web-scraping-pod
  ports:
  - port: 5000
    targetPort: 5000
  type: LoadBalancer

Then run the following:

C:\Source\Repos\web-scraping>kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* docker-desktop docker-desktop docker-desktop
docker-for-desktop docker-desktop docker-desktop
minikube minikube minikube
C:\Users\thund\Source\Repos\web-scraping>kubectl apply -f .\deployment.yml
deployment.apps/web-scraping-deployment created

C:\Users\thund\Source\Repos\web-scraping>kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
web-scraping-deployment 1/1 1 1 26s

C:\Users\thund\Source\Repos\web-scraping>kubectl apply -f .\service.yml
service/web-scraping-service created

C:\Users\thund\Source\Repos\web-scraping>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8h
web-scraping-service LoadBalancer 10.98.0.224 localhost 5000:32024/TCP 9s

You should be able to see the following containers running in Docker:

image

After this, we will be able to see the content of it on http://localhost:5000

image


Source code with samples can be found here:

- https://github.com/JordiCorbilla/web-scraping 

OpenAI WhatsApp Bot with Python



The "OpenAI-Whatsapp-Bot" is a project focused on integrating OpenAI's language models with WhatsApp. This Python-based bot enables users to interact with AI directly through WhatsApp messages. Repo can be found here.






Setup environment:

  • Windows 11 PRO (Mini Server x64 Intel Celeron 12 Gb)
  • Python 3.12.1
  • Sql Server Express v15
  • Twilio Account
  • Ngrok
  • OpenAI API Key
  • Whatsapp (local phone)

1) Setting up your Twilio Account

  • Create a free account with Twilio here -> https://www.twilio.com/en-us
  • Set up WhatsApp messaging using the Twilio Sandbox; you'll be given a +1 phone number to interact with:

2) Install Python and create your development environment

I've done this using the latest Python available, 3.12.1 (https://www.python.org/ftp/python/3.12.1/python-3.12.1-amd64.exe), at the time of this writing.

C:\repo\OpenAI-Whatsapp-Bot> python -m venv venv;
  • Enable the dev environment:
C:\repo\OpenAI-Whatsapp-Bot\venv\Scripts>activate.bat
  • Upgrade pip
(venv) C:\repo\OpenAI-Whatsapp-Bot\venv\Scripts>python.exe -m pip install --upgrade pip
Requirement already satisfied: pip in c:\repo\openai-whatsapp-bot\venv\lib\site-packages (23.2.1)
Collecting pip
  Obtaining dependency information for pip from https://files.pythonhosted.org/packages/15/aa/3f4c7bcee2057a76562a5b33ecbd199be08cdb4443a02e26bd2c3cf6fc39/pip-23.3.2-py3-none-any.whl.metadata
  Using cached pip-23.3.2-py3-none-any.whl.metadata (3.5 kB)
Using cached pip-23.3.2-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 23.2.1
    Uninstalling pip-23.2.1:
      Successfully uninstalled pip-23.2.1
Successfully installed pip-23.3.2
  • Install Microsoft C++ Build Tools 



  • Install the requirements file using pip install -r requirements.txt:

aiohttp==3.9.1
aiohttp-retry==2.8.3
aiosignal==1.3.1
annotated-types==0.6.0
anyio==3.7.1
attrs==23.2.0
certifi==2023.11.17
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
distro==1.9.0
fastapi==0.109.0
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.2
httpx==0.26.0
idna==3.6
iniconfig==2.0.0
multidict==6.0.4
openai==0.28.0
packaging==23.2
pluggy==1.3.0
psycopg2-binary==2.9.9
pydantic==2.5.3
pydantic_core==2.14.6
PyJWT==2.8.0
pyngrok==7.0.5
pyodbc==5.0.1
pytest==7.4.4
python-decouple==3.8
python-multipart==0.0.6
PyYAML==6.0.1
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.35.1
tqdm==4.66.1
twilio==8.11.1
typing_extensions==4.9.0
urllib3==2.1.0
uvicorn==0.26.0
yarl==1.9.4
(venv) C:\repo\OpenAI-Whatsapp-Bot>pip install -r requirements.txt
Collecting fastapi (from -r requirements.txt (line 1))
  Using cached fastapi-0.109.0-py3-none-any.whl.metadata (24 kB)
Collecting uvicorn (from -r requirements.txt (line 2))
  Using cached uvicorn-0.26.0-py3-none-any.whl.metadata (6.4 kB)
Collecting twilio (from -r requirements.txt (line 3))
  Using cached twilio-8.11.1-py2.py3-none-any.whl.metadata (12 kB)
Collecting openai (from -r requirements.txt (line 4))
  Using cached openai-1.9.0-py3-none-any.whl.metadata (18 kB)
Collecting python-decouple (from -r requirements.txt (line 5))
  Using cached python_decouple-3.8-py3-none-any.whl (9.9 kB)
Collecting sqlalchemy (from -r requirements.txt (line 6))
  Using cached SQLAlchemy-2.0.25-cp312-cp312-win_amd64.whl.metadata (9.8 kB)
Collecting psycopg2-binary (from -r requirements.txt (line 7))
  Using cached psycopg2_binary-2.9.9-cp312-cp312-win_amd64.whl.metadata (4.6 kB)
Collecting python-multipart (from -r requirements.txt (line 8))
  Using cached python_multipart-0.0.6-py3-none-any.whl (45 kB)
Collecting pyngrok (from -r requirements.txt (line 9))
  Using cached pyngrok-7.0.5-py3-none-any.whl.metadata (6.2 kB)
Collecting pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 (from fastapi->-r requirements.txt (line 1))
  Using cached pydantic-2.5.3-py3-none-any.whl.metadata (65 kB)
Collecting starlette<0.36.0,>=0.35.0 (from fastapi->-r requirements.txt (line 1))
  Using cached starlette-0.35.1-py3-none-any.whl.metadata (5.8 kB)
Collecting typing-extensions>=4.8.0 (from fastapi->-r requirements.txt (line 1))
  Using cached typing_extensions-4.9.0-py3-none-any.whl.metadata (3.0 kB)
Collecting click>=7.0 (from uvicorn->-r requirements.txt (line 2))
  Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting h11>=0.8 (from uvicorn->-r requirements.txt (line 2))
  Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting requests>=2.0.0 (from twilio->-r requirements.txt (line 3))
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting PyJWT<3.0.0,>=2.0.0 (from twilio->-r requirements.txt (line 3))
  Using cached PyJWT-2.8.0-py3-none-any.whl.metadata (4.2 kB)
Collecting aiohttp>=3.8.4 (from twilio->-r requirements.txt (line 3))
  Using cached aiohttp-3.9.1-cp312-cp312-win_amd64.whl.metadata (7.6 kB)
Collecting aiohttp-retry>=2.8.3 (from twilio->-r requirements.txt (line 3))
  Using cached aiohttp_retry-2.8.3-py3-none-any.whl (9.8 kB)
Collecting anyio<5,>=3.5.0 (from openai->-r requirements.txt (line 4))
  Using cached anyio-4.2.0-py3-none-any.whl.metadata (4.6 kB)
Collecting distro<2,>=1.7.0 (from openai->-r requirements.txt (line 4))
  Using cached distro-1.9.0-py3-none-any.whl.metadata (6.8 kB)
Collecting httpx<1,>=0.23.0 (from openai->-r requirements.txt (line 4))
  Using cached httpx-0.26.0-py3-none-any.whl.metadata (7.6 kB)
Collecting sniffio (from openai->-r requirements.txt (line 4))
  Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting tqdm>4 (from openai->-r requirements.txt (line 4))
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting greenlet!=0.4.17 (from sqlalchemy->-r requirements.txt (line 6))
  Using cached greenlet-3.0.3-cp312-cp312-win_amd64.whl.metadata (3.9 kB)
Collecting PyYAML (from pyngrok->-r requirements.txt (line 9))
  Using cached PyYAML-6.0.1-cp312-cp312-win_amd64.whl.metadata (2.1 kB)
Collecting attrs>=17.3.0 (from aiohttp>=3.8.4->twilio->-r requirements.txt (line 3))
  Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp>=3.8.4->twilio->-r requirements.txt (line 3))
  Using cached multidict-6.0.4.tar.gz (51 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting yarl<2.0,>=1.0 (from aiohttp>=3.8.4->twilio->-r requirements.txt (line 3))
  Using cached yarl-1.9.4-cp312-cp312-win_amd64.whl.metadata (32 kB)
Collecting frozenlist>=1.1.1 (from aiohttp>=3.8.4->twilio->-r requirements.txt (line 3))
  Using cached frozenlist-1.4.1-cp312-cp312-win_amd64.whl.metadata (12 kB)
Collecting aiosignal>=1.1.2 (from aiohttp>=3.8.4->twilio->-r requirements.txt (line 3))
  Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting idna>=2.8 (from anyio<5,>=3.5.0->openai->-r requirements.txt (line 4))
  Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting colorama (from click>=7.0->uvicorn->-r requirements.txt (line 2))
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting certifi (from httpx<1,>=0.23.0->openai->-r requirements.txt (line 4))
  Using cached certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)
Collecting httpcore==1.* (from httpx<1,>=0.23.0->openai->-r requirements.txt (line 4))
  Using cached httpcore-1.0.2-py3-none-any.whl.metadata (20 kB)
Collecting annotated-types>=0.4.0 (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->-r requirements.txt (line 1))
  Using cached annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting pydantic-core==2.14.6 (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->-r requirements.txt (line 1))
  Using cached pydantic_core-2.14.6-cp312-none-win_amd64.whl.metadata (6.6 kB)
Collecting charset-normalizer<4,>=2 (from requests>=2.0.0->twilio->-r requirements.txt (line 3))
  Using cached charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl.metadata (34 kB)
Collecting urllib3<3,>=1.21.1 (from requests>=2.0.0->twilio->-r requirements.txt (line 3))
  Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB)
Using cached fastapi-0.109.0-py3-none-any.whl (92 kB)
Using cached uvicorn-0.26.0-py3-none-any.whl (60 kB)
Using cached twilio-8.11.1-py2.py3-none-any.whl (1.7 MB)
Using cached openai-1.9.0-py3-none-any.whl (223 kB)
Using cached SQLAlchemy-2.0.25-cp312-cp312-win_amd64.whl (2.1 MB)
Using cached psycopg2_binary-2.9.9-cp312-cp312-win_amd64.whl (1.2 MB)
Using cached pyngrok-7.0.5-py3-none-any.whl (21 kB)
Using cached aiohttp-3.9.1-cp312-cp312-win_amd64.whl (362 kB)
Using cached anyio-4.2.0-py3-none-any.whl (85 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached distro-1.9.0-py3-none-any.whl (20 kB)
Using cached greenlet-3.0.3-cp312-cp312-win_amd64.whl (293 kB)
Using cached httpx-0.26.0-py3-none-any.whl (75 kB)
Using cached httpcore-1.0.2-py3-none-any.whl (76 kB)
Using cached pydantic-2.5.3-py3-none-any.whl (381 kB)
Using cached pydantic_core-2.14.6-cp312-none-win_amd64.whl (1.9 MB)
Using cached PyJWT-2.8.0-py3-none-any.whl (22 kB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached starlette-0.35.1-py3-none-any.whl (71 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Using cached PyYAML-6.0.1-cp312-cp312-win_amd64.whl (138 kB)
Using cached annotated_types-0.6.0-py3-none-any.whl (12 kB)
Using cached attrs-23.2.0-py3-none-any.whl (60 kB)
Using cached certifi-2023.11.17-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl (100 kB)
Using cached frozenlist-1.4.1-cp312-cp312-win_amd64.whl (50 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Using cached urllib3-2.1.0-py3-none-any.whl (104 kB)
Using cached yarl-1.9.4-cp312-cp312-win_amd64.whl (76 kB)
Building wheels for collected packages: multidict
  Building wheel for multidict (pyproject.toml) ... done
  Created wheel for multidict: filename=multidict-6.0.4-cp312-cp312-win_amd64.whl size=28182 sha256=ec4f815cc6a33c1f931ddf3bdfd1ae03acb6868b33a27ed22107bfeb83367e7e
  Stored in directory: c:\users\jordi\appdata\local\pip\cache\wheels\f6\d8\ff\3c14a64b8f2ab1aa94ba2888f5a988be6ab446ec5c8d1a82da
Successfully built multidict
Installing collected packages: python-decouple, urllib3, typing-extensions, sniffio, PyYAML, python-multipart, PyJWT, psycopg2-binary, multidict, idna, h11, greenlet, frozenlist, distro, colorama, charset-normalizer, certifi, attrs, annotated-types, yarl, tqdm, sqlalchemy, requests, pyngrok, pydantic-core, httpcore, click, anyio, aiosignal, uvicorn, starlette, pydantic, httpx, aiohttp, openai, fastapi, aiohttp-retry, twilio
Successfully installed PyJWT-2.8.0 PyYAML-6.0.1 aiohttp-3.9.1 aiohttp-retry-2.8.3 aiosignal-1.3.1 annotated-types-0.6.0 anyio-4.2.0 attrs-23.2.0 certifi-2023.11.17 charset-normalizer-3.3.2 click-8.1.7 colorama-0.4.6 distro-1.9.0 fastapi-0.109.0 frozenlist-1.4.1 greenlet-3.0.3 h11-0.14.0 httpcore-1.0.2 httpx-0.26.0 idna-3.6 multidict-6.0.4 openai-1.9.0 psycopg2-binary-2.9.9 pydantic-2.5.3 pydantic-core-2.14.6 pyngrok-7.0.5 python-decouple-3.8 python-multipart-0.0.6 requests-2.31.0 sniffio-1.3.0 sqlalchemy-2.0.25 starlette-0.35.1 tqdm-4.66.1 twilio-8.11.1 typing-extensions-4.9.0 urllib3-2.1.0 uvicorn-0.26.0 yarl-1.9.4
  • Create the Conversations table in SQL Server to store the messages:

CREATE TABLE Conversations(
    id int identity(1,1) primary key,
    sender varchar(max) null,
    message varchar(max) null,
    response varchar(max) null
)
  • Setting up FastAPI

Set up a barebones API (bot.py):

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def index():
    return {"msg": "running"}

Run the API using uvicorn bot:app --reload in your virtual environment:

C:\repo\OpenAI-Whatsapp-Bot\venv\Scripts>activate.bat

then:

(venv) C:\repo\OpenAI-Whatsapp-Bot>uvicorn bot:app --reload
INFO:     Will watch for changes in these directories: ['C:\\repo\\OpenAI-Whatsapp-Bot']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [248] using StatReload
INFO:     Started server process [7056]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

3) Setup Ngrok

Go to Ngrok and download the utility (https://ngrok.com/download), then expose your port 8000 via ngrok http http://localhost:8000 if you don't have a domain. By default, ngrok will provide a random domain, e.g. https://621c00345df3.ngrok.app
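Alternatively, since pyngrok is already in the requirements file, the tunnel can be opened from Python itself; a minimal sketch:

from pyngrok import ngrok

# Open an HTTP tunnel to the local FastAPI server running on port 8000
public_url = ngrok.connect(8000).public_url
print(f"Webhook base URL: {public_url}")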

Navigate to your Twilio account and enter the end-point in the sandbox configuration:



4) Python API interaction

After running bot.py, setting up the Twilio account, and interacting with WhatsApp, you'll be able to see the activity on your screen:

image
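The actual wiring lives in the repo's bot.py; purely as an illustration, here is a minimal hedged sketch of what the WhatsApp webhook could look like (the route name, model, and form handling are assumptions, not the repo's exact code), replying through Twilio's TwiML format:

from fastapi import FastAPI, Form, Response
from openai import OpenAI
from twilio.twiml.messaging_response import MessagingResponse

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/message")
async def reply(Body: str = Form(...)):
    # Ask the model for an answer to the incoming WhatsApp message
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": Body}],
    )
    answer = completion.choices[0].message.content

    # Wrap the answer in TwiML so Twilio relays it back over WhatsApp
    twiml = MessagingResponse()
    twiml.message(answer)
    return Response(content=str(twiml), media_type="application/xml")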

5) Database Message Storage

As each message gets processed, the sender, the incoming message, and the generated response are stored in the Conversations table created earlier.
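A minimal hedged sketch of that insert using pyodbc (which is in the requirements); the connection string, server, and database names are placeholders for your local SQL Server Express instance, not the repo's exact configuration:

import pyodbc

# Placeholder connection string for a local SQL Server Express instance
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost\\SQLEXPRESS;DATABASE=WhatsappBot;Trusted_Connection=yes;"
)

def store_conversation(sender: str, message: str, response: str) -> None:
    # Persist one exchange into the Conversations table defined above
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO Conversations (sender, message, response) VALUES (?, ?, ?)",
        (sender, message, response),
    )
    conn.commit()
    cursor.close()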



6) Final Result, talking through WhatsApp



Running a local LLM on your desktop/server

$
0
0
This article provides guidance on setting up and running a local Large Language Model (LLM) on a desktop environment. The repository contains relevant resources and code, primarily in Jupyter Notebook format, to facilitate the installation and operation of a local LLM. For detailed instructions and code, visit the GitHub repository.








Download LM Studio:

image

  • Run the setup file (LM-Studio-0.2.12-Setup.exe) to install LM Studio locally.

  • Once installed, LM Studio will open:

image

Download LLMs:

  • Search for the latest Llama-2 model and install one that will fit on your machine:

image

  • Navigate to the chat section, select the downloaded model, and start chatting!

image

Enable the local inference server

  • We can expose this model so we can access it programmatically:
  • Navigate to the local server tab and start the server on port 1234.

image

Use OpenAI API to talk to the model

Get my Jupyter Notebook and run it:

https://github.com/JordiCorbilla/Running_Local_LLM/blob/main/Running_LLM_Locally.ipynb

image

Now you can talk to the LLM locally and you can even expose this externally using ngrok.
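If you prefer not to open the notebook, here is a minimal sketch of the same idea, assuming the LM Studio server is running on port 1234 and exposing its OpenAI-compatible /v1 endpoint (the model name and prompt are illustrative):

from openai import OpenAI

# Point the OpenAI client at the local LM Studio inference server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model is currently loaded
    messages=[{"role": "user", "content": "Summarise what an LLM is in one sentence."}],
)
print(completion.choices[0].message.content)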

Getting Started with ComfyUI: A Simple Guide


 

If you're diving into the world of AI art, you've probably heard about Stable Diffusion and how it’s revolutionizing digital creativity. ComfyUI is one of the most intuitive and powerful interfaces to work with Stable Diffusion, and in this blog post, I'll guide you through the installation process and introduce you to a basic workflow.

What is ComfyUI?

ComfyUI is a user-friendly graphical interface designed for interacting with Stable Diffusion models. It allows you to easily set up workflows for generating images, making it an excellent tool for both beginners and advanced users in AI art.

How to Install ComfyUI

Installing ComfyUI is incredibly straightforward:

  1. Download the ComfyUI package from the official website or repository.
  2. Extract the downloaded files to a location on your computer.
  3. Run the installation by simply double-clicking the run_nvidia_gpu.bat file located in the ComfyUI_windows_portable folder.

And that's it! ComfyUI will launch, and you’ll be ready to start creating.

Understanding Stable Diffusion and LoRA

Before jumping into workflows, it's essential to understand a couple of key concepts:

  • Stable Diffusion: This is a powerful AI model that generates images based on textual descriptions. It’s highly versatile and can create anything from realistic photos to abstract art.


  • LoRA (Low-Rank Adaptation): LoRA is a method used to fine-tune and adapt large generative models such as Stable Diffusion to specific tasks or styles. By applying LoRA, you can create images in a particular style, such as anime or manga, with greater accuracy.

Basic Workflow: DBZ Style Images

For those looking to create Dragon Ball Z (DBZ) style images using ComfyUI, I've put together a simple workflow that utilizes a few custom nodes.

Resources You’ll Need:

  1. AutismMix SDXL: This model is essential for generating high-quality base images. You can download it here. Place the file autismmixSDXL_autismmixDPO.safetensors in your ComfyUI_windows_portable\ComfyUI\models\checkpoints folder.

  2. Vixon's Anime/Manga Styles: This is a LoRA model that tailors the image generation process to the specific style of Dragon Ball Z. Download it here and place the file DragonBallZXLP.safetensors in your ComfyUI_windows_portable\ComfyUI\models\loras folder.

Setting Up the Workflow:

  1. Download the DBZ Workflow: You can grab the pre-configured workflow here.

  2. Load the Workflow in ComfyUI:

    • Open ComfyUI.
    • Navigate to the workflow section and load the dbz.json file you just downloaded.

  3. Run the Workflow: Hit the 'Queue Prompt' button to generate your first DBZ-style image.

Conclusion

ComfyUI makes it easy to experiment with Stable Diffusion models, and with the right resources and workflows, you can create stunning images tailored to your specific needs. Whether you’re new to AI art or an experienced user, ComfyUI offers a seamless way to bring your creative visions to life.

Feel free to download and experiment with the workflows from my blog, and let your creativity soar!
