OTN TechBlog


Adding Tracing to Your Distributed Cloud Native Microservices

Mon, 2021-04-05 08:00

When adopting cloud-native technologies and certain architectures such as the microservice pattern, observability and monitoring become a huge need and a high priority for many development teams. On the “monitoring” side, I recently blogged about using Micronaut’s built-in support for Micrometer and the OCI SDK integrations to collect and analyze your server and application-related performance metrics with OCI Monitoring. But what about “observability”? It’s just as important to be able to trace and analyze requests across your distributed services so you can obtain a complete picture and pinpoint bottlenecks and issues before they become a real headache. To that end, I want to talk to you about adding tracing to your Micronaut applications. Just as you’d expect, there is plenty of support for adding tracing to your applications in the Micronaut ecosystem. Is it easy to integrate this support into your OCI environment? Let’s take a look.

Tracing Requests with Micronaut

Micronaut features support for integrating with the two most popular solutions for tracing: Zipkin and Jaeger. To get comfortable with tracing, let’s launch Zipkin locally and create two simple microservices that communicate to see how distributed tracing works.

Launch Zipkin

The quickest and easiest way is to launch a Docker container.
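Something like the following should do it, using the standard openzipkin/zipkin image:

```shell
docker run -d -p 9411:9411 --name zipkin openzipkin/zipkin
```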

Hit localhost:9411 in your browser to make sure it’s up and running.

Generate & Configure Microservices

Using the Micronaut CLI, generate two services. Include the management and tracing-zipkin features.
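With the Micronaut CLI installed, the commands would look something like this (the application names are just examples):

```shell
mn create-app demo1 --features management,tracing-zipkin
mn create-app demo2 --features management,tracing-zipkin
```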

Edit src/main/resources/application.yml in demo1 to configure a few variables and point the application at the local Zipkin install.
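Here's a minimal sketch of that configuration (a sampler probability of 1 traces every request; you'd tune it down in production):

```yaml
micronaut:
  application:
    name: demo1
tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1
```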

Configure demo2 to run on port 8081 (to avoid conflict with demo1) and point at the local Zipkin install as well.
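For example:

```yaml
micronaut:
  application:
    name: demo2
  server:
    port: 8081
tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1
```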

Create Controllers

Starting with demo2, create a controller that returns a “favorite number” for a user based on their ID. We use the special annotation @ContinueSpan to indicate that we want to group this endpoint along with whatever request called it in our traces. The @SpanTag annotation on the method parameter lets us pull out specific variables to include in our tracing spans so that we can filter or use them for troubleshooting later on.
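A sketch of what that controller might look like (the endpoint path and the "favorite number" logic are placeholders):

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.tracing.annotation.ContinueSpan;
import io.micronaut.tracing.annotation.SpanTag;

@Controller("/user")
public class FavoriteNumberController {

    // Continue the caller's span and tag it with the incoming user ID
    @ContinueSpan
    @Get("/favorite-number/{id}")
    public int favoriteNumber(@SpanTag("user.id") String id) {
        // Stand-in for a real lookup
        return Math.abs(id.hashCode() % 100);
    }
}
```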

Next, in the demo1 service, create a declarative HTTP client that can be used to make calls to demo2 from demo1.
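A minimal sketch, assuming demo2 is reachable on localhost:8081 and exposes the endpoint sketched above:

```java
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;

@Client("http://localhost:8081")
public interface Demo2Client {

    @Get("/user/favorite-number/{id}")
    int getFavoriteNumber(String id);
}
```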

Now we’ll create a controller in demo1 that has a few endpoints for testing. Note that we’re injecting the Demo2Client and making a call to demo2 from demo1 in the /user/{id} endpoint.
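Trimmed to the relevant endpoint, it might look like this:

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import java.util.Map;

@Controller("/user")
public class UserController {

    private final Demo2Client demo2Client;

    public UserController(Demo2Client demo2Client) {
        this.demo2Client = demo2Client;
    }

    // The server span for this request and the client call to demo2
    // end up grouped in a single trace
    @Get("/{id}")
    public Map<String, Object> user(String id) {
        return Map.of("id", id, "favoriteNumber", demo2Client.getFavoriteNumber(id));
    }
}
```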

We can run each service at this point and make some calls to the various endpoints. Take a look at Zipkin and see how it handles tracing for the microservices. 

Now drill in to one of the /user/{id} calls (by clicking on ‘Show’) to see the spans from demo2 included in the trace.

Click on the ‘Demo2’ span to highlight the row and then click ’Show Annotations’ on the right-hand side to view span details and the user.id that we tagged with the @SpanTag annotation.

We can also use the user.id to query spans.

As you can see, tracing distributed microservices with Micronaut and Zipkin is not difficult. However, it does require that you install, configure, maintain, and secure your own Zipkin install. For larger teams with a strong DevOps presence, this isn’t a problem. But for smaller teams or organizations that don’t have the resources to dedicate to infrastructure management, is there a managed service option? The answer to that question is almost always “yes”, but that answer invariably leads to the next obvious question: “How difficult is it to migrate to the managed option, and what will it take to migrate off of it if we ever have to?” Those are fair questions - and as usual with Oracle Cloud Infrastructure, you have an option that is fully compatible with the popular industry standard and can be dropped in with just minor config changes. Let’s look at using Application Performance Monitoring for our tracing endpoint instead of Zipkin.

Using OCI Application Performance Monitoring as a Drop-In Tracing Replacement

OCI Application Performance Monitoring (APM) is a suite of services that give you insight into your applications and servers running in OCI via a small agent that runs on the machine and aggregates and reports metric data. It’s a nice service to monitor and diagnose performance issues. It also includes a Trace Explorer that is Zipkin (and Jaeger) compatible and we can use that Trace Explorer from our Micronaut applications (even without taking full advantage of APM via the Java Agent). Let’s swap out Zipkin for APM Trace Explorer in our microservices.

Create Cloud Configuration

In the demo1 project, create a new file in src/main/resources/ called application-oraclecloud.yml. This file will automatically be used when your application runs in the Oracle Cloud thanks to Micronaut’s environment detection features.
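The shape of the config mirrors the Zipkin version, swapping in the APM collector endpoint. The URL and path below are placeholders - see the next section for how the real values are constructed, and verify the path format against the APM documentation:

```yaml
tracing:
  zipkin:
    enabled: true
    http:
      url: https://<your-data-upload-endpoint>   # from the APM domain details
      path: /20200101/observations/private-span?dataFormat=zipkin&dataFormatVersion=2&dataKey=<private-data-key>
    sampler:
      probability: 1
```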

Do the same for demo2.

Create an APM Domain

Now, in the OCI console, create an APM domain. We’ll share a single domain that will be used to group and trace all of our services. I know that may seem a bit confusing given the name ‘domain’, but think of it more like a “project group” or an “environment” (you may want to create separate domains for QA, Test, Prod, etc). Search for ‘Application Performance Monitoring’ and click on ‘Administration’.

In the left sidebar, click on ‘APM Domains’.

Click on ‘Create APM Domain’.

Name it, choose a compartment and enter a description.

Once the domain is created, view the domain details. Here you’ll need to grab a few values, so copy the data upload endpoint (#1), private key (#2), and public key (#3).

Now we have what we need to construct a URL to plug into our application config files. The ‘Collector URL’ format uses the data upload endpoint as the base URL, with a path generated from some choices, including values from our private or public key. The format is documented here. Once we’ve constructed the URL path, we can plug it into our application-oraclecloud.yml config. Since we use the same domain for both services, the URL and path will be the same in both config files.

If you wanted to keep these values out of the config file, you could alternatively set them as environment variables on the server like so:
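Micronaut maps environment variables onto config keys, so something like this would work (the values are placeholders):

```shell
export TRACING_ZIPKIN_HTTP_URL=https://<your-data-upload-endpoint>
export TRACING_ZIPKIN_HTTP_PATH='/20200101/observations/private-span?dataFormat=zipkin&dataFormatVersion=2&dataKey=<private-data-key>'
```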

And that’s it! Just by creating an APM domain and plugging in our new URL and path, our applications will start sending tracing data to APM. We can run a few requests and then head to the Trace Explorer in the OCI console to view, search, and filter our traces just like we did in Zipkin.

Choose your APM domain in the top right and the time period that you’d like to view/search.

Choose one of the available pre-configured queries across the top.

View traces and spans:

Click on a trace to view detailed info.

Click on a span inside a trace to view detailed info and tagged values.

Read more about the Trace Explorer in the documentation.

Summary

In this post, we looked at how to use tracing to gain insight into our Micronaut microservices. We first looked at using Zipkin, then we switched to the fully managed OCI Trace Explorer with nothing but a few changes to our configuration.

If you’d like to see the code used in this demo, check out the following GitHub repos.
 

I hope you enjoyed this look at tracing in the cloud. If there is another topic you’d like to see covered here on the developer blog, please drop a comment below!

Image by Free-Photos from Pixabay 


Podcast #391: Jeff Smith on Helping Developers Get the Most out of Oracle Database

Wed, 2021-03-31 09:25

I've always been interested in how people with serious technical skills solve difficult problems and create new opportunities. Sometimes it just seems like magic. But they don’t just do this alone. They create leverage by using advanced tools to help extend their ideas and implement their solutions. And they also collaborate in teams internally at the company and with developers in software communities globally.

Jeff Smith is one of those people. He’s a distinguished product manager on the Oracle Database team, and he’s been working with database technology for 20 years. He’s experienced everything along the way, and I check in with him occasionally to see what’s going on. I’m never disappointed.

In this conversation, we talked about some interesting features that have been emerging in the Oracle Database recently, such as a fully integrated development environment right in the console for rapidly building RESTful web services. With just a few clicks, developers can be coding SQL or uploading JSON documents with predefined, editable schemas, so developers can have the best of both worlds. And there are many more new bits in there to help developers and administrators be more productive.

But it goes well beyond technology. Just like his colleagues on the database team, Jeff is fully embedded in the community and has been so for decades. I’ve always told members of the Oracle community that it’s important for them to realize that they play a critical role in how the technology advances. Jeff agrees and says the community’s influence on Oracle Database is huge. In this podcast, he tells the story of how code directly influenced by interactions with the community has made its way into GitHub repositories very rapidly.

We talked about many more things, of course. Give a listen below and let us know what you think. Cheers.

"It's all about talking to customers. I really like sharing our story and the technology that our brilliant engineers have built." - Jeff Smith

Some Additional Related Podcast Links

Oracle Groundbreakers Podcast Links

Project GreenThumb Part 5 - The Front-End, Build Pipeline, Push Notifications and Overall Progress

Wed, 2021-03-31 07:00

In this short blog series, I introduced you to Project GreenThumb, a project that I created to automate and monitor the process of growing seedlings with hardware, software and the cloud. If you haven’t read the other posts in this series, I encourage you to do so.

In this post, we’ll wrap things up by looking at the front-end, how I automated the application build to deploy things to the cloud, how I added support for push notifications and finally we’ll look at the current progress of the project against the stated goals. 

Adding Simple Views to the Micronaut Application

Micronaut clearly shines as a “data first” cloud-native microservice platform, but what you may not know is that it also includes support and integrations for server-side view rendering. To avoid blocking the Netty event loop, Micronaut handles server-side view rendering on the I/O thread pool. A number of view rendering engines are supported (Handlebars, Velocity, Freemarker, etc) but I chose Thymeleaf because of my slight familiarity with it over the other choices. To render a view, your controller must return a ModelAndView object which contains the name of the view template to render and the object to use as the model.
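A minimal sketch of such a controller (the view name and model are examples):

```java
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.views.ModelAndView;
import java.util.Map;

@Controller
public class HomeController {

    // Renders the Thymeleaf template at src/main/resources/views/home.html
    @Get(produces = MediaType.TEXT_HTML)
    public ModelAndView<Map<String, Object>> index() {
        return new ModelAndView<>("home", Map.of("title", "Project GreenThumb"));
    }
}
```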

The view can access any model variable with the familiar ${variable} syntax. 

The Front-End

There was no need to complicate the front-end. I just needed to present the data in a way that would give me a quick, full overview of the current sensor data and I feel like I accomplished that with a simplified, yet responsive layout.

The home view connects to the WebSocket server endpoint that I previously established and updates a list of reading values in memory (limiting it to the 50 most recent readings) when a new message is received. 

For reports, a single page outputs a number of various views of the aggregated sensor data. For example, I can see the average readings by hour of day for the current day which lets me make necessary adjustments if things look out of the ordinary.

I can also gauge the long-term success of the project by seeing the averages by hour of day for all time.

Or by looking at the daily average by day:

Of course, since I have different goals for “day” vs. “night”, I need a report that shows the progress against those metrics:

And finally, an overall total average for the sensor data.

The Build (GitHub Actions)

Of course, no project would be complete without automating the build process. For that, I added a GitHub Actions workflow. The workflow checks out the code, then builds the JAR file:
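A sketch of the first part of that workflow (the branch name, Java version, and Gradle task are assumptions):

```yaml
name: Build & Deploy
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: '11'
      - name: Build the JAR
        run: ./gradlew shadowJar
```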

Then it logs in to Oracle Cloud Infrastructure Registry (OCIR), builds a Docker image (using the out-of-the-box Dockerfile provided by Micronaut), and pushes the image to OCIR.
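Continuing the same job, the OCIR steps might look like this (the secret names are hypothetical):

```yaml
      - name: Log in to OCIR
        run: docker login ${{ secrets.OCIR_REGION }}.ocir.io -u '${{ secrets.OCIR_USERNAME }}' -p '${{ secrets.OCIR_TOKEN }}'
      - name: Build and push the image
        run: |
          docker build -t ${{ secrets.OCIR_REGION }}.ocir.io/${{ secrets.OCIR_TENANCY }}/greenthumb:latest .
          docker push ${{ secrets.OCIR_REGION }}.ocir.io/${{ secrets.OCIR_TENANCY }}/greenthumb:latest
```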

Finally, I log in to my VM, stop the existing Docker image, and pull and run the latest image on the VM:
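One common way to script that step is the community appleboy/ssh-action, sketched here with hypothetical secret names and image path:

```yaml
      - name: Deploy on the VM
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.VM_HOST }}
          username: ${{ secrets.VM_USER }}
          key: ${{ secrets.VM_SSH_KEY }}
          script: |
            docker stop greenthumb || true
            docker rm greenthumb || true
            docker pull <region>.ocir.io/<tenancy>/greenthumb:latest
            docker run -d --name greenthumb -p 8080:8080 <region>.ocir.io/<tenancy>/greenthumb:latest
```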

Push Notification Alerts

What good is collecting a bunch of data if there is no automated call to action when the data indicates it is necessary? I could have easily added some automated watering to the project, but since I’m new to growing things like this, I wanted to maintain some granular (manual) control over that process until I was more comfortable with it. I figured it would be super handy to add push notifications using Pushover so that I would get a notification when the soil moisture indicated that I should take a look at things and water the seedlings. To integrate with my Pushover account, I could have dropped in the pushover4j library (side note: enough with the “4j” projects already!), but since it’s just a POST request to the API, I decided to avoid adding another dependency and just use a declarative HTTP client with Micronaut. First, I set up my Pushover config.
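Something along these lines, with the token and user key supplied via environment variables (these config keys are app-specific, not a Micronaut standard):

```yaml
pushover:
  token: ${PUSHOVER_TOKEN}
  user: ${PUSHOVER_USER}
```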

Next, I created a POJO to contain the API response:
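The Pushover API responds with a status and a request ID, so the POJO can be as small as this:

```java
import io.micronaut.core.annotation.Introspected;

// Maps the Pushover response, e.g. {"status":1,"request":"<uuid>"}
@Introspected
public class PushoverResponse {

    private int status;
    private String request;

    public int getStatus() { return status; }
    public void setStatus(int status) { this.status = status; }

    public String getRequest() { return request; }
    public void setRequest(String request) { this.request = request; }
}
```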

Finally, I created the client interface. Micronaut will handle all the necessary plumbing at compile-time and the client is ready to use in the application.
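A sketch of that interface - Pushover's message endpoint is POST /1/messages.json with token, user, and message values:

```java
import io.micronaut.http.annotation.Post;
import io.micronaut.http.client.annotation.Client;

@Client("https://api.pushover.net")
public interface PushoverClient {

    // Unannotated parameters become properties of the request body
    @Post("/1/messages.json")
    PushoverResponse sendMessage(String token, String user, String message);
}
```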

Then I injected the client into my MQTT consumer, checked the readings when a message was received, and sent a push notification (throttled to once every 20 minutes) if the soil moisture level dropped below 50%. Of course, this could be extended to other metrics and thresholds as necessary just by modifying the message argument as appropriate.
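The throttled check might look roughly like this (the field and method names are assumptions; in the real consumer this logic lives alongside the persistence code):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class SoilMoistureAlert {

    private final PushoverClient pushoverClient;
    private final String token;
    private final String user;
    private Instant lastNotified = Instant.EPOCH;

    public SoilMoistureAlert(PushoverClient pushoverClient, String token, String user) {
        this.pushoverClient = pushoverClient;
        this.token = token;
        this.user = user;
    }

    public void check(double soilMoisture) {
        // Notify at most once every 20 minutes when moisture drops below 50%
        if (soilMoisture < 50 && Instant.now().isAfter(lastNotified.plus(20, ChronoUnit.MINUTES))) {
            pushoverClient.sendMessage(token, user, "Soil moisture is below 50% - time to water the seedlings!");
            lastNotified = Instant.now();
        }
    }
}
```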

Here’s what the notification looks like on my Pixel 3 XL device.

When I click on the notification I get a link to follow directly to the dashboard.

The Progress (Results)

As of publishing, it’s been 3 weeks since the seedlings were planted. There have been some minor adjustments to both hardware and software as I learn what works best for the system, but so far the results are mostly in range (or extremely close to the targets).

If we look at it from an “overall” standpoint (disregarding night/day or daily/hourly breakdown):

Data, however, only tells half the story. The results that really matter are whether or not the seedlings have sprouted and are looking healthy. To that end:

 

Summary

This story doesn’t conclude today, and when it does conclude later this year, the achievement will be much more difficult to quantify. You see, to me the success of this project relies on the flavor and heat of the hot sauce that it will ultimately result in, and flavor is truly subjective and depends on the tastes of the person who is judging the product. But I guess there’s another way to gauge the success of this project, and that is to look at the value of the experience itself. In that light, I feel like the project has already been a huge success because it gave me something to plan, build, learn from, and share with the developer communities that I work with every day.

If you'd like to check out any of the code used in this blog series, please refer to the appropriate repos on GitHub:


Collect and Analyze Application and Server Metric Data with Micronaut's Support for OCI Monitoring

Tue, 2021-03-30 08:00

The Micronaut team over in Oracle Labs has been hard at work on a number of impressive features and framework improvements, and they’re moving so fast that I can hardly keep up with all the awesomeness. A few weeks ago an update was released for the Micronaut module for Oracle Cloud, and I blogged about the automatic wallet download and configuration for Autonomous DB connections from Micronaut. But there was another feature in that release that I didn’t have a chance to blog about at the time: Micrometer support for OCI Monitoring. This powerful feature uses Micronaut’s support for Micrometer to let your applications report an abundance of valuable server and application insight directly into OCI Monitoring, where it can be sliced and diced as your team sees fit. You can even create alarms and send notifications based on the metrics collected. Best of all, it’s really simple to use and requires nothing but a bit of configuration in your application. Let me show you how!

Configuring Your App

As I stated just a second ago, it’s really just a matter of configuring your application to collect and report these metrics to OCI monitoring. If you are creating a new app from scratch, make sure to add the oracle-cloud-sdk feature.
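With the Micronaut CLI, that would look something like this (the app name is an example):

```shell
mn create-app example-metrics --features oracle-cloud-sdk
```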

Next, as with any feature in the OCI Module, you must configure an auth provider. On my localhost, I just use a config file provider.
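For example (key names as I recall them from the micronaut-oraclecloud module docs; check your module version):

```yaml
oci:
  config:
    profile: DEFAULT
```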

Of course, when I deploy to OCI I usually use an instance principal, so my configuration for that looks like so:
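A sketch of the instance principal config (again, verify the key names against your module version):

```yaml
oci:
  config:
    instance-principal:
      enabled: true
```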

Note: Naming the configuration file with the special suffix -oraclecloud will ensure that this config file gets automatically picked up and used when deployed to OCI thanks to Micronaut’s automatic environment detection feature.

Next, add dependencies for micronaut-oraclecloud-micrometer, the OCI Monitoring SDK, and the standard Micronaut Micrometer dependencies:
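In Gradle, something like the following (versions omitted; use the ones matching your Micronaut BOM):

```groovy
implementation("io.micronaut.oraclecloud:micronaut-oraclecloud-micrometer")
implementation("io.micronaut.micrometer:micronaut-micrometer-core")
implementation("com.oracle.oci.sdk:oci-java-sdk-monitoring")
```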

Now we just modify src/main/resources/application.yml to configure Micrometer. The namespace and resourceGroup will be how you find your metrics in the OCI console later on. 
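A minimal sketch (the namespace and resource group values are examples):

```yaml
micronaut:
  metrics:
    enabled: true
    export:
      oraclecloud:
        enabled: true
        namespace: myapp_metrics
        resourceGroup: myapp_group
```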

When the application is launched, it will now start reporting metrics to OCI monitoring!

View Metrics

By default, your application and server metrics are reported every 60 seconds (this is configurable). After the application has been running a short while, you can now check the Metrics Explorer in the OCI console to view the data.

Choose the metric namespace (#1) and resource group (#2) that you entered in your config and then select the metric (#3), interval (#4), and statistic (#5).

For example, a simple look at incoming HTTP requests.

Or memory used:

Or CPU usage:

A surprisingly large number of metrics are reported - everything from JVM stats to system metrics like process uptime and system load. You can also enable metrics for Hibernate, JDBC connection pools, or even create your own custom metrics. See the docs for more info.

Summary

In this post, we looked at how to configure your Micronaut application to report application and server metrics to the OCI monitoring service. We also looked at how to view the collected data in the OCI console. 

Photo by Miguel A. Amutio on Unsplash


Project GreenThumb Part 4 - Reporting Queries and WebSockets

Mon, 2021-03-29 07:00

In the last post in this series, we looked at the database schema behind the scenes, how I created the Micronaut application, how I consumed the sensor readings, and how I set up the application for sensor reading persistence with Micronaut Data. In this post, we’ll look at reporting and how that’s accomplished, as well as how I added WebSocket support to the application to push the sensor readings to the front-end in real time. We’ll wrap things up in the next post with the front-end and a look at the current progress for Project GreenThumb.

Reporting Queries

In addition to the interface-based repository for basic CRUD operations that we looked at in the last post, I created an abstract class for Reading that gives me the ability to inject an EntityManager so that I can create and run native SQL queries against my GREENTHUMB_READINGS table for use in some of the advanced reports that I wanted to include in the application.

I mentioned above that storing the reading JSON in a column would still allow us to query against the JSON data using familiar SQL. This was especially important as I really wanted the ability to view the aggregate data from different viewpoints. For example, I wanted to view the aggregate data by hour of the day, or day of the month. Also, I wanted to be able to compare periods like night vs. day to see if I was meeting the stated goals of the project. 

Viewing all of the data was easy:
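For example:

```sql
SELECT id, reading, created_on
FROM greenthumb_readings
ORDER BY created_on DESC;
```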

Which gives me:

If I need to pull elements out of the JSON, I can do that:
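Oracle's JSON_VALUE function pulls scalar values out of the JSON column (the key names below are assumptions based on the sensor list from part 1):

```sql
SELECT created_on,
       JSON_VALUE(reading, '$.soilTemp' RETURNING NUMBER)     AS soil_temp,
       JSON_VALUE(reading, '$.humidity' RETURNING NUMBER)     AS humidity,
       JSON_VALUE(reading, '$.soilMoisture' RETURNING NUMBER) AS soil_moisture
FROM greenthumb_readings;
```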

Which means I can start aggregating and grouping the output:
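For example, averaging by hour of day:

```sql
SELECT TO_CHAR(created_on, 'HH24') AS hour_of_day,
       ROUND(AVG(JSON_VALUE(reading, '$.soilTemp' RETURNING NUMBER)), 2) AS avg_soil_temp,
       ROUND(AVG(JSON_VALUE(reading, '$.humidity' RETURNING NUMBER)), 2) AS avg_humidity
FROM greenthumb_readings
GROUP BY TO_CHAR(created_on, 'HH24')
ORDER BY hour_of_day;
```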

For performance, I turned this into a materialized view that refreshes itself every 5 minutes (there’s no real need for “live” data for these reports).
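Roughly like this (the view name and columns are illustrative):

```sql
CREATE MATERIALIZED VIEW readings_by_hour
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 5/1440  -- refresh every 5 minutes
AS
SELECT TO_CHAR(created_on, 'HH24') AS hour_of_day,
       AVG(JSON_VALUE(reading, '$.soilTemp' RETURNING NUMBER)) AS avg_soil_temp
FROM greenthumb_readings
GROUP BY TO_CHAR(created_on, 'HH24');
```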

As you can see, this gives me the ability to construct all of the queries that I need to view the sensor data from multiple dimensions. Plugging these queries into the Micronaut application is a matter of creating an AbstractReadingRepository, injecting an EntityManager, and running native queries that are mapped to DTOs.  Essentially, like this:
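A simplified sketch (in the real app the rows are mapped to proper DTOs rather than raw result lists):

```java
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.PageableRepository;

import javax.persistence.EntityManager;
import javax.transaction.Transactional;
import java.util.List;

@Repository
public abstract class AbstractReadingRepository implements PageableRepository<Reading, Long> {

    private final EntityManager entityManager;

    protected AbstractReadingRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // Native query against the materialized view defined above
    @Transactional
    public List<?> averagesByHour() {
        return entityManager
            .createNativeQuery("SELECT hour_of_day, avg_soil_temp FROM readings_by_hour ORDER BY hour_of_day")
            .getResultList();
    }
}
```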

WebSockets

Right out of the box, Micronaut includes full support for WebSocket clients and servers. Adding a WebSocket server is a matter of creating a class annotated with @ServerWebSocket, which accepts a URI argument that represents the server endpoint. Methods of the server class are then annotated with @OnOpen, @OnMessage, or @OnClose to represent the handlers called for the appropriate server action. A WebSocketBroadcaster is injected (and available to be injected elsewhere in the application) that is used to broadcast messages to connected clients. The broadcaster offers both blocking (broadcastSync) and non-blocking (broadcastAsync) methods.

For this project, I wanted a way to be able to push the sensor data to the front-end in real-time, so I added a WebSocket server endpoint.
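A sketch of that endpoint (the URI is an assumption):

```java
import io.micronaut.websocket.WebSocketBroadcaster;
import io.micronaut.websocket.WebSocketSession;
import io.micronaut.websocket.annotation.OnClose;
import io.micronaut.websocket.annotation.OnMessage;
import io.micronaut.websocket.annotation.OnOpen;
import io.micronaut.websocket.annotation.ServerWebSocket;

@ServerWebSocket("/greenthumb")
public class GreenThumbWebSocket {

    private final WebSocketBroadcaster broadcaster;

    public GreenThumbWebSocket(WebSocketBroadcaster broadcaster) {
        this.broadcaster = broadcaster;
    }

    @OnOpen
    public void onOpen(WebSocketSession session) {
        // nothing to do here; the MQTT consumer broadcasts readings as they arrive
    }

    @OnMessage
    public void onMessage(String message, WebSocketSession session) {
        // this endpoint only pushes data, so incoming messages are ignored
    }

    @OnClose
    public void onClose(WebSocketSession session) {
    }
}
```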

 

With the WebSocket server and persistence tier now in place, I could finally modify the MQTT consumer to persist the message to the DB and broadcast it to any WebSocket clients. For this, I edited the GreenThumbConsumer.
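Roughly like this (the topic name and the Reading constructor are assumptions):

```java
import io.micronaut.mqtt.annotation.MqttSubscriber;
import io.micronaut.mqtt.annotation.Topic;
import io.micronaut.websocket.WebSocketBroadcaster;

@MqttSubscriber
public class GreenThumbConsumer {

    private final ReadingRepository readingRepository;
    private final WebSocketBroadcaster broadcaster;

    public GreenThumbConsumer(ReadingRepository readingRepository, WebSocketBroadcaster broadcaster) {
        this.readingRepository = readingRepository;
        this.broadcaster = broadcaster;
    }

    @Topic("greenthumb/readings")
    public void receive(byte[] data) {
        String json = new String(data);
        readingRepository.save(new Reading(json));  // persist the raw JSON reading
        broadcaster.broadcastAsync(json);           // push it to connected WebSocket clients
    }
}
```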

At this point, the application was ready for a front-end that would consume the real-time data, chart it, and present a few reports.

Summary

In this post, we looked at the SQL queries used for reporting on the collected sensor data and at how that data is pushed in real time to clients connected to the WebSocket endpoint that I established. In the next post, we’ll look at the front-end, the automated build process, push notifications, and talk about the current progress of Project GreenThumb!

Photo by Christopher Robin Ebbinghaus on Unsplash


Project GreenThumb Part 3 - Consuming and Persisting the Sensor Data in the Cloud

Fri, 2021-03-26 07:00

In my last post, we looked at how the Arduino code reads from the attached sensors and publishes those readings to a message queue running in the cloud. That’s only half of the technical story though, as the data doesn’t do me any good until it’s persisted and visualized! In this post, we’ll start to look at how that’s done.

The Persistence & Visualization App Build

As mentioned in part one, I chose Micronaut (my favorite Java framework) for the monolithic cloud application that consumes the MQTT messages, persists them, and pushes them in real time to a simple HTML front-end via WebSockets. The data is persisted into a table in an Autonomous DB instance in the cloud. I know that sounds complicated, but I promise, it’s really not! Let’s look at how I accomplished this, first by looking at the DB instance and schema. After that, we’ll dig into the Micronaut application.

The DB Instance & Schema

Autonomous DB is a full-blown cloud database that is completely managed. You can create an instance and be up and running in just a few minutes. 

New To Autonomous DB? Check out The Complete Guide To Getting Up And Running With Autonomous Database In The Cloud.

Once my instance was up and running, I created the table that I would use to store the readings. Since the reading data was a bit unstructured (the values from the readings are not all the same data type) I could handle this in several ways. The first approach would be to create tables for each different sensor and type the value column accordingly, but that’s a bit rigid. I decided to go with the second option: create a table with a JSON column and store the entire JSON object for each timestamped reading. This allowed me to adapt to any future variations in the data (additional sensors would just be part of the JSON) but let me remain flexible with my querying since Autonomous DB has full support for JSON columns! Here’s the DDL I used to create the table:
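Based on the description below, the DDL looked essentially like this:

```sql
CREATE TABLE greenthumb_readings (
    id         NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    reading    CLOB CONSTRAINT reading_is_json CHECK (reading IS JSON),
    created_on TIMESTAMP
);
```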

I’ve got an ID column that uses auto-number for the primary key, the READING column is defined as a CLOB (and is constrained to ensure that it is valid JSON), and a CREATED_ON column to contain the timestamp that the reading was obtained. That’s it - that’s the entire schema. Just a simple table that I'll use to store the message JSON, but as you’ll see later on it remains plenty flexible so that I can create a wide range of reports based on the data.

The Micronaut Application

Let’s take a look at some of the highlights of the Micronaut application that I created to persist the readings and publish the front-end. 

Show Me The Code! We’ll look at the exciting parts below, but if you want to see the entire application you can check out the project on GitHub.

Creating The App

Bootstrapping an app with Micronaut can be done in several ways. I like to use the web-based Micronaut Launch because it’s difficult to remember every single option when creating a new project and the UI gives you nice dropdown options for configuration, but you can also use the Micronaut CLI with the mn create-app command. For this project, here are the options I selected to generate the application. Notice that I selected the data-jpa, mqttv3, and oracle-cloud-sdk features to be included. 
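The equivalent CLI command would be something like this (the package name is an example):

```shell
mn create-app codes.recursive.greenthumb --features data-jpa,mqttv3,oracle-cloud-sdk
```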

Hot Tip! You can create a new GitHub repo directly from the code generated by Micronaut Launch and check out your application locally. 

Automatic Autonomous Wallets

Once I checked out my brand-new Micronaut application, the first change that I made was to add some configuration for automatic wallet download for my Autonomous DB connection. This is a new feature that was just added to the Oracle Cloud SDK module that makes life so much easier, and I recently blogged about how to configure your app for auto wallet download, so check out that post for details. Basically, the automatic wallet download required me to add two blocks to the configuration file located at src/main/resources/application-local.yml. The first block is the configuration needed for the OCI SDK module, and since I have the OCI CLI installed locally, the module is able to load and utilize the information in the CLI’s config file if we tell it the profile to use.
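For example:

```yaml
oci:
  config:
    profile: DEFAULT
```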

The second block that we have to add is to tell Micronaut which DB to use for our datasource. By configuring the DB OCID, username and password it has enough information to download and configure the datasource (even without a URL!).
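A sketch of that block (key names as I recall them from the micronaut-oraclecloud docs; the OCID is a placeholder):

```yaml
datasources:
  default:
    ocid: ocid1.autonomousdatabase.oc1.phx.xxxxxxxx
    walletPassword: ${WALLET_PASSWORD}
    username: ${DB_USER}
    password: ${DB_PASSWORD}
```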

That’s all the DB-related configuration that needs to be done. We’ll look at how the datasource is used down below.

Consuming The MQTT Topic

My application needs to be able to easily consume the messages that the hardware client publishes to the MQTT topic, so MQTT integration was a must for the framework on the server-side. Spoiler alert: Micronaut, of course, makes this straightforward via the micronaut-mqtt module. Like our datasource above, this requires a bit of configuration in my src/main/resources/application-local.yml file.
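Something like the following (the host and credentials are placeholders; check the micronaut-mqtt docs for the exact keys in your version):

```yaml
mqtt:
  client:
    server-uri: tcp://my-rabbitmq-host:1883
    client-id: greenthumb-consumer
    user-name: ${MQTT_USER}
    password: ${MQTT_PASSWORD}
```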

Environment Specific Config! You may have noticed that my configuration file has the -local suffix in it. This means I have to pass in the environment when I run the app via -Dmicronaut.environments=local, but I could have just as easily left it named application.yml and it would have automatically been applied. But I like to be explicit with the config because it differentiates this file from the src/main/resources/application-oraclecloud.yml file that sits beside it. Since I have a slightly different config when I deploy the app, I like to keep a separate config file per environment, and Micronaut is totally smart enough to know that it is running in the Oracle Cloud and apply the proper config file at runtime!

The only thing left to do at this point is to create a consumer class with a receive method that will be fired every time a new message is received on the given topic.  For now, this consumer is just receiving the message, but I’ll show you how it handles those messages in just a bit.
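A minimal receive-only version might look like this (the topic name is assumed):

```java
import io.micronaut.mqtt.annotation.MqttSubscriber;
import io.micronaut.mqtt.annotation.Topic;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@MqttSubscriber
public class GreenThumbConsumer {

    private static final Logger LOG = LoggerFactory.getLogger(GreenThumbConsumer.class);

    @Topic("greenthumb/readings")
    public void receive(byte[] data) {
        LOG.info("Received: {}", new String(data));
    }
}
```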

If we add a logger entry to our src/main/resources/logback.xml file and start the app up at this point, we can see each message as it is received.
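For example (adjust the package name to your project):

```xml
<logger name="codes.recursive" level="INFO"/>
```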

Excellent! The application is now receiving messages every 10 seconds from the hardware! Let’s work on adding persistence to the application so that we can save each reading as it is received.

Persisting the Reading With Micronaut Data

Right. So, persistence, eh? Seems like it might be the part where things get tricky, but rest assured this part is just as uncomplicated as the rest of the project so far. For this, I’m using Micronaut Data (the JPA flavor, as opposed to the JDBC variety). Here’s the process:

Step 1 - Configuration

This is already done. Since we chose the data-jpa feature on Micronaut Launch and set up our config files above, all of the necessary dependencies and configuration are already in place.

Step 2 - Create the Domain Entity

I created a class called Reading and added properties to represent each column in the database table. I then annotated the class with @Entity and @Table (the table annotation is only necessary because the domain class name is slightly different from the table name).  
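A sketch of the entity (the column types follow the DDL above):

```java
import io.micronaut.data.annotation.DateCreated;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;
import java.time.LocalDateTime;

@Entity
@Table(name = "GREENTHUMB_READINGS")
public class Reading {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Lob
    private String reading;  // the raw JSON reading

    @DateCreated
    private LocalDateTime createdOn;

    // constructor, getters, and setters omitted
}
```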

The ID column is annotated with @Id and @GeneratedValue so Micronaut knows that Oracle will handle generating the ID, and the @DateCreated annotation on the createdOn property tells Micronaut to timestamp the value at creation. Add a constructor and getters and setters and the domain entity is ready to go (I left them out above for brevity).

Step 3 - Create a Repository

Next, I created an interface that extends PageableRepository. This interface will be implemented at compile time since we’ve told Micronaut everything it needs to know about our table and datasource, and the concrete methods will be available at runtime for all of our basic CRUD operations.
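For example:

```java
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.PageableRepository;

@Repository
public interface ReadingRepository extends PageableRepository<Reading, Long> {
}
```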

Step 4 - Using the Repository

At this point, the ReadingRepository is ready to be injected into controllers or other classes in the application and used for persisting new Reading objects. We’ll look at how that’s done in the next post in this series.

Summary

In this post, we looked at the DB schema, how I created the monolithic Micronaut application and consumed and persisted the sensor data. In the next post, we’ll look at the queries used for reporting on the data.

Photo by Clint Patterson on Unsplash


Project GreenThumb Part 2 - The Data Collection

Wed, 2021-03-24 07:00

Welcome to part 2 of my short series about Project GreenThumb, a hardware, software and cloud-based solution for monitoring and automating seedling growth. In my last post, I introduced you to the motivation and goals of the project and we looked at the hardware setup. In this post, I want to go a bit more in-depth about the Arduino code necessary to collect the environment data and publish it to a message queue so that it can later be consumed, persisted and visualized. Let’s get into it!

The Hardware Build & Schematic

I assembled the hardware by soldering the sensors mentioned in the last post to the NodeMCU board and leaving plenty of slack in each lead wire in order to ensure they would reach the seedling tray. Here’s a wiring diagram that shows which pins were used for which sensors. 

The Arduino Code

Thanks to the amazing Arduino community, I was able to rely on a number of libraries to read all of the sensors, output the data to the OLED display and publish the messages to MQTT. 

Reading Sensors

Some sensors are straightforward - just read the value using analogRead or digitalRead. 
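For example, inside loop() (the pin assignments are from my wiring and will vary):

```cpp
// Simple sensors: raw analog/digital reads
int soilMoisture = analogRead(A0);  // resistive soil moisture sensor
int light = digitalRead(D5);        // light sensor module with a digital output
```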

Others are a bit more complex, requiring third-party libraries.
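For example, a DHT-style temperature/humidity sensor via the Adafruit DHT library (the sensor model and pin are assumptions):

```cpp
#include <DHT.h>

DHT dht(D4, DHT22);

void setup() {
  dht.begin();
}

void loop() {
  float humidity = dht.readHumidity();
  float airTemp = dht.readTemperature();  // Celsius by default
  delay(10000);
}
```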

There’s nothing really complex about reading most sensors with Arduino, and as I mentioned, there are open source libraries that can help with just about every sensor out there.

Need To See More? Don’t worry, I’ve published the entire client source code on GitHub.

Serializing Messages as JSON

I planned on publishing all of the sensor readings from each iteration to a message queue in the cloud as a single JSON object. Once the readings are obtained, JSON serialization is handled with the ArduinoJson library (https://arduinojson.org).

Step 1 - Include the Library
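With ArduinoJson, this is a single include:

```cpp
#include <ArduinoJson.h>
```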

Step 2 - Create a JSON Document & String to Hold Serialized Result
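A StaticJsonDocument sized for a handful of readings works here:

```cpp
StaticJsonDocument<256> doc;
String serializedReading;
```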

Step 3 - Set Document Values & Serialize
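The key names below are assumptions based on the sensor list from part 1:

```cpp
doc["airTemp"] = airTemp;
doc["soilTemp"] = soilTemp;
doc["humidity"] = humidity;
doc["soilMoisture"] = soilMoisture;
doc["light"] = light;
serializeJson(doc, serializedReading);  // writes the JSON string into serializedReading
```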

The MQTT Client

The MQTT Client by Adafruit was used to publish the messages to an MQTT topic running on RabbitMQ in an always free VM instance in the Oracle Cloud. 

Need A RabbitMQ Instance? Check out how to launch your own instance of the popular messaging queue on an always free instance in the Oracle Cloud!

I’m only doing one-way messaging (publishing sensor data), but I could quite easily modify the code to receive incoming messages as well. Here’s a simple overview of how to use the MQTT client:

Step 1 - Include the Library
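On the NodeMCU, the network client comes from the ESP8266 core:

```cpp
#include <ESP8266WiFi.h>
#include <Adafruit_MQTT.h>
#include <Adafruit_MQTT_Client.h>
```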

Step 2 - Create the Client & Publisher
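The broker host, port, credentials, and topic name below are placeholders:

```cpp
WiFiClient client;
Adafruit_MQTT_Client mqtt(&client, "my-rabbitmq-host", 1883, "mqtt_user", "mqtt_pass");
Adafruit_MQTT_Publish readings = Adafruit_MQTT_Publish(&mqtt, "greenthumb/readings");
```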

Step 3 - Create a Function for Connecting to the Client
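This follows the pattern from the Adafruit examples - connect() returns 0 on success:

```cpp
void MQTT_connect() {
  if (mqtt.connected()) {
    return;
  }
  while (mqtt.connect() != 0) {
    // retry every 5 seconds until the broker accepts the connection
    mqtt.disconnect();
    delay(5000);
  }
}
```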

Step 4 - Connect & Publish Message
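Putting it together in loop():

```cpp
void loop() {
  MQTT_connect();
  // ...read the sensors and serialize to JSON as shown earlier...
  readings.publish(serializedReading.c_str());
  delay(10000);  // one reading every 10 seconds
}
```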

I won’t cover every bit of the microcontroller code here, but the examples above should give you a basic idea of what it takes to read the sensors, serialize the data, and publish it to the MQTT topic. The full source code for the microcontroller project is available on GitHub if you’d like to check it out.

Summary

In this post, we looked at some of the code that I used to read and publish the sensor data to a message queue. In the next post, we’ll start to look at how I consumed, persisted, and visualized that sensor data.

Photo by Vishnu Mohanan on Unsplash


Project GreenThumb Part 1 - Automating & Monitoring Seedling Growth With Microcontrollers ...

Mon, 2021-03-22 07:00

It all started in the summer of 2020. I, like most others, had been isolating at home to minimize my chances of becoming infected by the coronavirus. The only times I left my home were to head to the store or market to obtain groceries or other essential supplies. It was on one of these trips that inspiration struck. With proper social distancing and mask, I walked past a booth at the local farmers market that was selling some hot chili (or, chile) peppers and I knew immediately what had to be done. I would have to make hot sauce! For me, it was the perfect project to undertake as it combined my love of spicy foods and the culinary arts into an endeavor that would give me a daily routine to look forward to and keep my mind occupied each day when distractions from the growing boredom of being stuck at home were in short supply. I bought about 5 pounds of the perfumed and pungent peppers and prepared to produce a perfect product. I cleaned and roughly chopped the peppers, tossed them into large mason jars with some aromatics (garlic, onions, etc.), covered them in a brine solution, weighted them down and waited. The fermentation process involved frequent monitoring to release the gas produced during the process and make sure that no “bad” bacteria entered the party. Several weeks later, when the fermentation process had reached a favorable point, I drained, blended and cooked the mash into a spicy and savory blend that I bottled and shared with some friends.

Thus was born a new obsession, and as the calendar turned into 2021 I knew that I would have to take things one step further and produce this year's sauce with peppers that I had grown myself from seed.  Of course, the prospect of home horticulture instantly led me to think of how I could integrate more of my passions - technology, automation and the cloud - into the growing and production process. So in early February, I set out to build an Arduino-based monitoring and automation solution for my seed-growing operation that would help me achieve my garden goals. Of course, I was aware of the untold number of gardening-related open source solutions built with various microcontrollers and single-board computers, but this project had to be my own from the start. I had nothing against any of those solutions or tutorials, and I’m not suggesting that my method is in any way superior to any of them, but I wanted to try to solve this problem as organically as possible to see what I could learn and accomplish without outside influence. I decided to share this project to inspire others to do the same with a problem that they face or perceive in their world because I feel that there is an inherent sense of pride and growth that comes along with learning about solutions by struggling through some of the problems one faces when crafting a solution from scratch. In a series of short blog posts, I'm going to walk you through the hardware, software and cloud side of what I’m calling “Project GreenThumb”, and I hope it teaches you something new and inspires you to take on a similar project. 

The Objective

Now let me get this out of the way right up front - I’m certainly no master horticulturalist, so I’m quite positive that someone who knows better will educate me on any incorrect assumptions that I’ve made below. That said, I did as I always do and made sure to do a fair bit of research before getting started and settled on these values as a result of that research. I chose 5 environmental attributes to monitor and control:

  • Air Temperature

  • Soil Temperature

  • Humidity 

  • Soil Moisture

  • Light 

Based on my experience, I knew these 5 elements would be rather easy to monitor via sensors connected to a microcontroller. My research led me to establish the following values as my targets for the seedlings during the time that they’d be incubating indoors:

With the objective established, it was time to assemble and program the microcontroller so that the planting phase could begin.

The Architecture

There was certainly a temptation for me to engineer an overly complex system for the task at hand (it’s what I always do), but this time my goal was to keep things as simple as possible while still providing valuable monitoring data and automating some portion of the operation. I decided to design a system that would regulate the soil temperature to stay within the goal range and to collect the other data to compare it against subsequent grow operations to see how I might improve the process in future growing seasons. I regret that I didn’t establish a manual “control” scenario (without any of the monitoring or automation) to have a scenario to compare my automation efforts against, but I simply didn’t have the extra space to grow (or transplant) another set of seedlings and didn’t want to waste resources just for comparison’s sake. 

The hardware portion of the project involves multiple sensors that would be attached to the NodeMCU ESP8266 microcontroller and a mini-OLED display for visual status reporting.

Lesson Learned! I should have used a capacitive soil moisture sensor instead of the resistive sensor that I chose to prevent issues with corrosion. Next time, I will certainly know better!

Once wired up and placed into a 3D-printed enclosure, the microcontroller assembly looked like this.

The NodeMCU reads the sensors every 10 seconds, wraps the current readings in a JSON object and publishes that object to an MQTT topic on a RabbitMQ instance running on an “always free” VM in the cloud. A simple Micronaut application consumes the MQTT topic, persists the readings into a table in an Autonomous DB instance (also “always free”), and simultaneously pushes the readings to subscribers on a WebSocket endpoint. I decided to keep things uncomplicated and used a monolithic approach, so I serve the views via the same Micronaut application instead of creating a separate project. This keeps my infrastructure needs to a minimum and the code manageable. My views are responsive so they look great on mobile, but instead of depending on a third-party framework like Bootstrap, I went with a “vanilla” CSS approach. Here’s a simple visualization of the architecture to help you see the big picture of this small project.

Temperature regulation was handled by placing a seedling heat mat below the planted seedlings and turning the heat mat on and off via a relay inside the outlet that the mat is connected to. Yeah, they make thermostat-driven heat mats that perform this functionality, but what fun would it be to not automate that myself? 

Once the hardware was assembled and wired up, I planted the seeds and placed the seedling tray on the heat mat.

Summary

In this post, I introduced you to Project GreenThumb, a hardware, software and cloud-based solution for monitoring and automating seedling growth. We looked at the goals of the project and I introduced you to the basic architecture and hardware used in the project. In the next post, we’ll take a deeper look at the data collection process.

Feeling Inspired? If this post inspired you to build something similar, you can host all of the infrastructure that I use in this project in the “always free” tier of the Oracle Cloud. You get 2 free Autonomous DB instances, 2 free Virtual Machine instances, 10GB of Object Storage, and more! No time limits, no hidden fees, no nonsense. Just free, forever. Sign up today: https://www.oracle.com/cloud/free/

 


IaC in the Cloud: Integrating Terraform and Resource Manager into your CI/CD Pipeline - ...

Fri, 2021-03-19 07:00

Welcome to the final post in this series about using Terraform to manage infrastructure in the Oracle Cloud. In the last post, we looked at how to use the OCI CLI in our GitHub Actions pipeline to execute our Terraform scripts via creating stacks and jobs with Resource Manager. In this post, we’ll simplify the concept and make it a bit more portable by using native Terraform in our GitHub Actions pipeline. You’ll lose a bit of the power and flexibility of Resource Manager, but if you’re just looking to simply build and maintain your infrastructure, this solution is perfect for you!

If you've missed the previous posts in this series, here is a list to catch up:

Building Infrastructure From Your Pipeline

Just like in our last post, we’ll need some secret values so that we can execute our Terraform scripts from our CI/CD pipeline. Set some secrets for the following values from your tenancy. 

Running With Terraform

Using the OCI CLI to build our Terraform scripts via Resource Manager is nice, but if you remember from our last post, it wasn’t exactly a quick process since we had to install the CLI and all of the Terraform script execution happened in our cloud tenancy instead of on the pipeline/build server. Let’s see if we can improve the build times (and reduce a bit of the build script complexity) by executing our scripts natively in the pipeline. 

We’ll start by defining our pipeline as we did before in a file called build.yaml. 

Note: Like before, we’ll use the same GitHub project, but again branched:  https://github.com/recursivecodes/oci-resource-manager-demo/tree/github-actions-tf

We’ve defined our environment variables again, but this time we prefixed them with TF_VAR_ which, if you remember back to an earlier post in this series, is a special prefix that Terraform will pick up on and use to set our script variables accordingly. Next, check out the code and configure the HashiCorp “setup-terraform” plugin, which will install Terraform in our build environment.
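A sketch of the top of build.yaml (the branch and secret names are assumptions; the TF_VAR_ names follow the usual OCI Terraform provider arguments):

```yaml
name: Build Infrastructure
on:
  push:
    branches: [github-actions-tf]
jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      TF_VAR_tenancy_ocid: ${{ secrets.TENANCY_OCID }}
      TF_VAR_user_ocid: ${{ secrets.USER_OCID }}
      TF_VAR_fingerprint: ${{ secrets.FINGERPRINT }}
      TF_VAR_private_key: ${{ secrets.PRIVATE_KEY }}
      TF_VAR_region: ${{ secrets.REGION }}
      TF_VAR_compartment_ocid: ${{ secrets.COMPARTMENT_OCID }}
    steps:
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
```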

That’s all the config we need. Now we can run our scripts directly via the Terraform CLI as we did earlier in this series when we ran them manually on our own machine. Add steps to initialize Terraform and validate our script(s):
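Continuing the steps list from the sketch above:

```yaml
      - name: Terraform Init
        run: terraform init
      - name: Terraform Validate
        run: terraform validate
```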

Then run terraform plan and terraform apply.
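In a non-interactive pipeline, apply needs -auto-approve since there's nobody around to type "yes":

      - name: Terraform Plan
        run: terraform plan
      - name: Terraform Apply
        run: terraform apply -auto-approve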

Commit and push the build file, and once again the pipeline will be executed automatically.

But this time, we get a much faster execution: from 3 minutes 17 seconds down to 13 seconds total.

Summary

In this post, we looked at executing our Terraform scripts to build our infrastructure in our CI/CD pipeline using the native Terraform CLI. 

Series Summary

In this series, we have focused on Infrastructure as Code. From the very basic intro to Terraform for developers to integrating our solution into our CI/CD pipeline, we have dug deep into every aspect of automating our infrastructure, and hopefully you have learned the basics and benefits of using this solution in your cloud native applications. As always, please feel free to provide me your feedback and check me out on Twitter.

Photo by Pete Gontier on Unsplash


ACE Blog Posts: January 31 - February 6 -- OCI, SQL, EPM, and More

Tue, 2021-03-16 16:50

The following 34 members of the Oracle ACE Program have written 67 blog posts between January 31st and February 6th. From OCI to EPM, these blog posts are sure to give you plenty to reflect on.

Become an Oracle ACE

Oracle ACE Director Clarisa Maman Orfali
Founder/System Engineer, ClarTech Solutions, Inc.
Irvine, CA

 

Oracle ACE Director David Kurtz
Consultant, Accenture Enkitec Group
London, United Kingdom

 

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland

 

Oracle ACE Director Oren Nakdimon
Database Architect/Developer, Moovit
Tzurit, Israel

 

Oracle ACE Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom

 

Oracle ACE Director Ron Ekins
Oracle Solution Architect, Office of the CTO, Pure Storage
Haywards Heath, United Kingdom

 

Oracle ACE Director/Groundbreaker Ambassador Tim Hall
DBA, Developer, Author, Trainer, Various Companies
Birmingham, United Kingdom

 

Oracle ACE Director Wayne Van Sluys
Lead Consultant, interRel Consulting
St. Louis, Missouri 

 

Oracle ACE Jörg Sobottka
Senior Consultant Platform Services, Robotron Schweiz GmbH
Basel, Canton of Basel-Stadt, Switzerland

 

Oracle ACE Johannes Michler
Head of Business Unit BPM, Promatis Software GmbH
Ettlingen, Baden-Württemberg, Germany

 

Oracle ACE Julian Frey
Expert Database Consultant, Edorex AG
Eich, Canton of Lucerne, Switzerland

 

Oracle ACE Kazuhiro Takahashi
Technical grade/ITSP database, NTT Data Corporation
Tokyo, Japan

 

Mahmoud Rabie
Senior IT Solution Architect/Senior IT Trainer, Edifice Vision
Riyadh, Saudi Arabia

 

Oracle ACE Marco Mischke
Group Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Oracle ACE Martien van den Akker
Contractor: Fusion MiddleWare Implementation Specialist, Immigratie- en Naturalisatiedienst (IND)
The Hague, Netherlands

 

Oracle ACE Michael McLaughlin
Professor, Brigham Young University - Idaho
Rexburg, Idaho

 

Oracle ACE Stefan Panek
Independent Consultant
Brühl, North Rhine-Westphalia, Germany

 

Oracle ACE Tercio Costa
Oracle Analyst, Indra Company
João Pessoa, Paraíba, Brazil

 

Yushan Bai
DBA, Hangzhou Daiwei Technology Co., Ltd.
Hangzhou, China

 

Oracle ACE Associate Cedric Leruth
Oracle Technology Architect, BAPM
Geneva, Switzerland

 

Christian Gohmann
Principal Consultant/Instructor, Trivadis Germany GmbH
Gelsenkirchen, Germany

 

Oracle ACE Associate Gary Gordhamer
Managing Principal Consultant, Viscosity North America
Milwaukee, Wisconsin

 

Oracle ACE Associate Nimish Garg
Associate Director - Data Analytics and Insights, Gartner
Gurgaon, India

 

Oracle ACE Associate Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia

 
Additional Resources

Image by Nico Becker from Pixabay

Announcing the 2021 Groundbreaker Ambassador Award Winners

Tue, 2021-03-16 15:07

We are excited to announce the Groundbreaker Ambassador 2021 award winners! These individuals are being recognized for their contributions in the developer community. Groundbreaker Ambassadors continuously share their knowledge and expertise through user group participation, blog posts, articles, conference presentations, social media, and many other channels.

Each name below links to a brief biography and social media resources so you can follow and connect with these community contributors and leaders.

Congratulations to the 2021 Groundbreaker Ambassadors!

Become a Groundbreaker Ambassador

Additional Resources

IaC in the Cloud: Integrating Terraform and Resource Manager into your CI/CD Pipeline - ...

Mon, 2021-03-15 07:00

Welcome back to this series where we’re learning all about using Terraform and Resource Manager to manage your infrastructure in the Oracle Cloud. In our last post, we saw how to use GitHub Actions to create a distributable release that can be shared with other developers and learned how to add a ‘Deploy to Oracle Cloud’ button to our repo. In this post, we’re going to look at executing our Terraform scripts with GitHub Actions using the OCI CLI in our workflow.

Building Infrastructure From Your Pipeline

So far in this series we have focused on manually invoking our infrastructure builds, but in this post we’re finally going to look at some automation options to include our cloud provisioning in our CI/CD pipeline. 

Create Secrets

In order to execute our Terraform scripts from our CI/CD pipeline, we’re going to need to set some secrets in our GitHub repo for the relevant values from your tenancy.

Running With Resource Manager via the OCI CLI

Now that our secrets are set, we can create a workflow with GitHub Actions that uses the OCI CLI to build our stack. We will declare a few environment variables that will be available to our job steps; see the sketch below.
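As a rough sketch, assuming secrets like OCI_TENANCY_OCID, OCI_USER_OCID, OCI_FINGERPRINT, OCI_PRIVATE_KEY, OCI_REGION, and OCI_COMPARTMENT_OCID (all names illustrative), the top of the workflow might look like:

name: build
on:
  push:
    branches:
      - main
jobs:
  deploy-infra:
    runs-on: ubuntu-latest
    env:
      # illustrative variable names; use whatever your scripts expect
      COMPARTMENT_OCID: ${{ secrets.OCI_COMPARTMENT_OCID }}
      STACK_DISPLAY_NAME: oci-resource-manager-demo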

Note: We’ll be working with the same code that we started using earlier in this series. If you get stuck or would like to view the entire project, check out this branch on GitHub that focuses on building with the OCI CLI: https://github.com/recursivecodes/oci-resource-manager-demo/tree/github-actions-resource-manager

Note: We’ll need a GitHub token to use in our workflow so that we can use GitHub itself as a source provider, so see part 3 of this series if you haven’t created that token yet.

Add your GitHub Access Token as a secret in your GitHub repo:

Now let’s add a few steps to our build. The first step will simply check out the project codebase into the CI/CD pipeline working directory. Then we will write our secret values out to the config file that the OCI CLI expects, install the CLI itself, and repair the permissions on the config file.
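Here is a sketch of those steps. The install script URL and the repair-file-permissions command are real parts of the OCI CLI tooling; the paths and secret names are illustrative:

    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Write OCI CLI config
        run: |
          mkdir -p ~/.oci
          cat <<EOF > ~/.oci/config
          [DEFAULT]
          user=${{ secrets.OCI_USER_OCID }}
          fingerprint=${{ secrets.OCI_FINGERPRINT }}
          tenancy=${{ secrets.OCI_TENANCY_OCID }}
          region=${{ secrets.OCI_REGION }}
          key_file=~/.oci/key.pem
          EOF
          echo "${{ secrets.OCI_PRIVATE_KEY }}" > ~/.oci/key.pem
      - name: Install OCI CLI
        run: |
          curl -L -o install.sh https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh
          bash install.sh --accept-all-defaults
          # make the CLI visible to later steps
          echo "$HOME/bin" >> $GITHUB_PATH
      - name: Repair config permissions
        run: |
          oci setup repair-file-permissions --file ~/.oci/config
          oci setup repair-file-permissions --file ~/.oci/key.pem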

And now we can start using the OCI CLI to perform the work that we manually performed in the previous posts in this series. Let’s look at a quick overview of the process as a refresher:

  • Create GitHub Source Provider (if necessary)

  • Create Stack (if necessary)

  • Create/Execute Plan Job

  • Create/Execute Apply Job

We can perform all of these tasks with the CLI in our workflow. First, let’s check for the source provider and stack, storing the OCIDs in environment variables if they are retrieved.
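The stack lookup might look like the step below, using a JMESPath --query to pull out the OCID; the display name and query expression are illustrative, and the source-provider lookup follows the same pattern with its own sub-command (see oci resource-manager -h for the exact names):

      - name: Look up existing stack
        run: |
          STACK_OCID=$(oci resource-manager stack list \
            --compartment-id $COMPARTMENT_OCID \
            --query "data[?\"display-name\"=='$STACK_DISPLAY_NAME'] | [0].id" \
            --raw-output)
          # exported to GITHUB_ENV so later steps (and their if: conditions) can see it
          echo "STACK_OCID=$STACK_OCID" >> $GITHUB_ENV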

Next, we’ll create the source provider and stack if they don’t exist (note the if conditional that prevents the step from running if the object’s ID exists in the correct environment variable).
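The mechanism is GitHub Actions' step-level if:. As a simplified illustration, this version creates the stack from a zip of the checked-out code via --config-source rather than wiring up the GitHub source provider as the post does (whose flags I won't guess at here):

      - name: Create stack if missing
        if: env.STACK_OCID == ''
        run: |
          zip -r stack.zip . -x '.git/*'
          STACK_OCID=$(oci resource-manager stack create \
            --compartment-id $COMPARTMENT_OCID \
            --display-name $STACK_DISPLAY_NAME \
            --config-source stack.zip \
            --query 'data.id' --raw-output)
          echo "STACK_OCID=$STACK_OCID" >> $GITHUB_ENV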

Finally, we can create the plan and apply jobs.
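Sketched with the create-plan-job and create-apply-job sub-commands; --wait-for-state keeps each step blocking until Resource Manager finishes (verify flag support against your CLI version):

      - name: Run plan job
        run: |
          PLAN_JOB_OCID=$(oci resource-manager job create-plan-job \
            --stack-id $STACK_OCID \
            --wait-for-state SUCCEEDED \
            --query 'data.id' --raw-output)
          echo "PLAN_JOB_OCID=$PLAN_JOB_OCID" >> $GITHUB_ENV
      - name: Run apply job
        run: |
          oci resource-manager job create-apply-job \
            --stack-id $STACK_OCID \
            --execution-plan-strategy FROM_PLAN_JOB_ID \
            --execution-plan-job-id $PLAN_JOB_OCID \
            --wait-for-state SUCCEEDED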

Once the build.yaml file is complete, we commit and push it to the remote GitHub repo to trigger the build. When the job has succeeded, we can check the logs.

Summary

We’ve fully automated the process of creating our source provider and stack and running jobs via our GitHub Actions CI/CD pipeline! Notice that the job in this case took 3 minutes and 17 seconds to run. The longest-running steps were the installation of the CLI and the steps to create the plan and apply jobs. Can we somehow improve those times? Read the next (and final) post in this series to find out!

Photo by NOAA on Unsplash


How to build a Raspberry Pi webcam, plus the surprising links between Pi and Oracle Cloud ...

Fri, 2021-03-12 13:30

Do you need a better webcam, or a second or third webcam for live-streaming? I recently discovered how easy it is to build a webcam using the Raspberry Pi Zero and a Raspberry Pi camera. I'll take you through the steps like it's your first Pi project ever — because it was mine! Not only does it work great, it only costs $25.

This is an easy way to get started in the world of Raspberry Pi. But what does it have to do with Oracle developers, you ask? Well, it turns out Oracle engineers love Raspberry Pi for all sorts of hobby, enterprise, and demonstration projects. In fact, the Raspberry Pi supercomputer (which I got to help assemble in the final stages) made worldwide news.

There’s a long history of running Java projects on Raspberry Pi, and Frank Delporte’s recent article in Java Magazine can start you down that road.

Oracle also offers Oracle Linux images for the Arm architecture, specifically for use with Raspberry Pi 4B, Raspberry Pi 3B, and Raspberry Pi 3B+. Included in the development preview is Unbreakable Enterprise Kernel Release 6, based on the upstream 5.4 kernel.

Finally, lest you think hobby computers have no utility for enterprise applications: one Oracle engineer needed a site-to-site tunnel between a customer’s in-house data center and the OCI Frankfurt region. For ease of testing, he set up a separate VPN tunnel to his own development network using libreswan on a Raspberry Pi. (You can find the Oracle libreswan documentation at https://docs.oracle.com/en-us/iaas/Content/Network/Reference/libreswanCPE.htm.)

 

Now, let’s get to the tutorial. Supplies:
  • Raspberry Pi Zero or Raspberry Pi Zero W (wireless)
  • Raspberry Pi Zero camera cable
  • Micro SD card
  • Raspberry Pi camera of any resolution
  • USB micro to regular USB cable
Steps:
  1. Connect your camera of choice (in my case, I am using the lowest quality camera available, the version 1) using the cable that fits the Raspberry Pi Zero. Use your fingernails to gently pry the corners of the cable clamps loose so that they wiggle very slightly. Install the cable with the shiny metallic connections facing the surface of the board. Use your fingernails to push down on the clamps on either side to secure the cable at each end.
  2. Download and install the Raspberry Pi imager from https://www.raspberrypi.org/software/.
  3. Go to https://github.com/showmewebcam/showmewebcam and click “tags” to find the most recent version.
  4. Choose the latest image (I used v1.70) and download it.
  5. Put your SD card in your computer’s card reader.
  6. Open Raspberry Pi Imager. Choose “Custom” for the operating system and select the showmewebcam image you just downloaded (mine was called sdcard-raspberrypi0-v1.70.img). Choose your SD card and click “Write”.
  7. When the card has been formatted, remove it and put it in your Raspberry Pi Zero.
  8. Attach the USB cable to the port in the middle of the Raspberry Pi Zero, not the one closer to the end. Connect the USB cable to your computer.
  9. In Zoom, choose PiWebCam in your video preferences; or, in OBS, add a new source by clicking the + and “Video Capture Device”, then choose PiWebCam in the next screen. Presto, it works!
  10. Now you may wish to build a small case for it. You can use cardboard or foam board, a 3D printed design or a wooden or plastic box, or buy a ready-made one if you’re using the high-quality camera. This one even has a little spirit level on it so that you can be sure your camera is positioned the way you want it.

Why it works:

The showmewebcam code is Linux-based firmware for the Raspberry Pi Zero that boots very quickly, lets your Pi Zero draw power from the computer like any other dongle, and tells your computer that what you have attached to it is a camera, not another computer. When you have connected the PiCam, the LEDs on the board and the camera both light up briefly. When the camera is ready for action, the green LED on the board blinks 3 times rapidly (that’s a tweakable setting included in the image).

The Show-me webcam works with Linux, Windows 10 and Mac operating systems, and various video streaming services including OBS, Zoom, Teams, Jitsi, Firefox and Chrome.

To change the camera settings, use the following shell commands in Terminal:

  1. Discover the name of your specific camera by typing: ls -l /dev/tty.*
  2. Note the device name that appears (for example, /dev/tty.usbmodem141103).
  3. Type screen followed by the device name and the baud rate 115200: screen /dev/tty.usbmodem141103 115200
  4. “Your webcam at your service!” appears. For the piwebcam login, type root. For the password, type root again.
  5. Type this: /usr/bin/camera-ctl
  6. A small menu that lets you adjust the parameters of your camera pops up. You can make permanent or temporary changes (permanent ones require writing to the camera.txt file on the SD card). Press S to save, and Command-Q to quit Terminal.

That’s it! I want to give credit to David Hunt for first popularizing this build. It has since been simplified, as evidenced in this tutorial, thanks to Hunt and the other contributors of the showmewebcam Linux firmware. Enjoy your new camera!

IaC in the Cloud: Integrating Terraform and Resource Manager into your CI/CD Pipeline - Release ...

Fri, 2021-03-12 07:00

Welcome back to this series where we’re learning all about using Terraform and Resource Manager to manage your infrastructure in the Oracle Cloud. In part 1, we learned the basics of Terraform that a developer needs to know to get started working with it. In part 2, we installed and ran our very first Terraform script. Part 3 took us into the cloud, where we ran our first Terraform script with Resource Manager, and we took it a step further in part 4, where we introduced variable input and validation into the process with Resource Manager schema documents. In this post, we’re going to start wrapping up the series by talking about how we might distribute our Terraform Infrastructure as Code and how we can integrate Terraform and Resource Manager into our build process via our CI/CD pipeline. We’ll look specifically at GitHub and GitHub Actions, but certainly the process would be similar for other CI/CD tools.

Adding a Workflow To Create a Distributable Stack With GitHub Actions

The first quick thing I want to cover in this post is creating a distributable stack from your Terraform scripts. Basically, that means creating a .zip file that you can share with other developers so that they can run your script in their own tenancy. Certainly the archive can be used in your own tenancy too, but we’ll look at better solutions for that below. Creating a zip with GitHub Actions is pretty easy: just add a YAML file to your repository in the directory .github/workflows and call it build-stack.yaml.

The gist of this workflow is that every time a new ’tag’ version is pushed to GitHub, the workflow will:

  • Checkout the code

  • Zip everything in the root

  • Publish the zip as an artifact on GitHub

  • Create a GitHub release

  • Upload the published artifact as a release

Here’s the YAML file.
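A sketch matching those bullets follows; the release actions and versions shown were the common choices in early 2021, and newer major versions exist:

name: Create Stack Release
on:
  push:
    tags:
      - 'v*'
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Zip everything in the root
        run: zip -r stack.zip . -x '.git/*'
      - name: Publish artifact
        uses: actions/upload-artifact@v2
        with:
          name: stack
          path: stack.zip
      - name: Create release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
      - name: Upload release asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./stack.zip
          asset_name: stack.zip
          asset_content_type: application/zip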

For my demo repo, I saved this file, created a tag, and pushed it to GitHub. Once the job is complete, you can go to the repo’s “Releases” page and see that the zip has been published.
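If you haven't published a tag before, it's just two commands:

git tag v1.0.0
git push origin v1.0.0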

Tip:  Your latest releases are always found at https://github.com/[user name]/[repo name]/releases/latest.

Adding a “Deploy to Oracle Cloud” Button to Your Project

Now that you’ve got a nice zip that others can use, why not add a simple ‘Deploy to Oracle Cloud’ button to your README.md so that others can deploy your stack with one click? Just follow the format defined in the docs and paste the markup into your README.md.
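At the time of writing, the documented markup looked roughly like the line below; substitute the URL of your released zip, and double-check the image and endpoint URLs against the current docs:

[![Deploy to Oracle Cloud](https://oci-resource-manager-static.oracle.com/deploy-to-oracle-cloud.svg)](https://cloud.oracle.com/resourcemanager/stacks/create?zipUrl=https://github.com/[user name]/[repo name]/releases/latest/download/stack.zip)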

Check out my repo for this blog post to see it in action at https://github.com/recursivecodes/oci-resource-manager-demo.

Summary

This was a quick but, I think, important blog post in this Terraform series, in which we learned how to use GitHub Actions to create and publish a release asset and add a ‘Deploy to Oracle Cloud’ button to our GitHub repos. In the next post, we’ll look at how we can create our Resource Manager stacks and jobs in our pipeline and apply those jobs directly from our CI/CD workflow.

Photo by Nazrin B-va on Unsplash


ACE Blog Posts: January 24 - January 30 -- APEX, Cloud, MySQL, and More

Thu, 2021-03-11 14:59

The 44 featured blog posts below were written by 24 members of the Oracle ACE Program. From APEX to MySQL, there is plenty of knowledge to gain. So branch out and enjoy these insightful blog posts.

Become an Oracle ACE

Fanggang Wang
Architect, JA Solar
Hangzhou, China

 

Oracle ACE Director Oren Nakdimon
Database Architect/Developer, Moovit
Tzurit, Israel

 

Oracle ACE Director/Groundbreaker Ambassador Tim Hall
DBA, Developer, Author, Trainer, Various Companies
Birmingham, United Kingdom

 

Oracle ACE Atul Kumar
Founder & CTO, K21 Academy
London, United Kingdom

 

Oracle ACE Erman Arslan
Senior Director, Database and Systems Management, GTech
Istanbul, Turkey

 

Himanshu Singh
Architect
Greater Noida West (Noida), India

 

Oracle ACE Johannes Ahrends
Executive Director, CarajanDB
Cologne, Germany

 

Mahmoud Rabie
Senior IT Solution Architect/Senior IT Trainer, Edifice Vision
Riyadh, Saudi Arabia

 

Oracle ACE Martien van den Akker
Contractor: Fusion MiddleWare Implementation Specialist, Immigratie- en Naturalisatiedienst (IND)
The Hague, Netherlands

 

Oracle ACE Michael Dinh
Oracle DBC, Pythian
Oceanside, California

 

Oracle ACE Michael McLaughlin
Professor, Brigham Young University - Idaho
Rexburg, Idaho

 

Oracle ACE Noriyoshi Shinoda
Architect, Hewlett Packard Enterprise
Tokyo, Japan
 

 

Oracle ACE Rustam Khodjaev
Founder/CEO, A1 Project
Dushanbe, Tajikistan

 

Satoshi Mitani
Database Platform Technical Lead, Yahoo! JAPAN
Tokyo, Japan

 

Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

 

Carlos Adriano Tanaka Bezerra
DBA/Cloud Architect, Accerte
Goiânia, Brazil

 

Oracle ACE Associate Hassan Abd Elrahman
Senior Oracle Technical Consultant, Cloud Solutions
Saudi Arabia

 

Oracle ACE Associate Karkuvelraja Thangamariappan
Oracle Certified Expert APEX Developer, DAMAC Properties
Dubai, United Arab Emirates

 

Xingxing Xi
DBA, Beijing HT Horizon Technology
Xi'an, China

 
Additional Resources

Image by Marco Federmann from Pixabay

Creating An ATP Instance With The OCI Service Broker

Mon, 2019-06-10 08:03

We recently announced the release of the OCI Service Broker for Kubernetes, an implementation of the Open Service Broker API that streamlines the process of provisioning and binding to services that your cloud native applications depend on.

The Kubernetes documentation lays out the following use case for the Service Catalog API:

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.

Put simply, the Service Catalog API lets you manage, from within Kubernetes, services that are not deployed within Kubernetes. Things like messaging queues, object storage, and databases can be deployed with a set of Kubernetes configuration files, without needing knowledge of the underlying API or tools used to create those instances, thus simplifying the deployment and making it portable to virtually any Kubernetes cluster.

The OCI Service Broker adapters available at this time include:

  • Autonomous Transaction Processing (ATP)
  • Autonomous Data Warehouse (ADW)
  • Object Storage
  • Streaming

I won't go into too much detail in this post about the feature, as the introduction post and GitHub documentation do a great job of explaining service brokers and the problems that they solve. Rather, I'll focus on using the OCI Service Broker to provision an ATP instance and deploy a container which has access to the ATP credentials and wallet.  

To get started, you'll first have to follow the installation instructions on GitHub. At a high level, the process involves:

  1. Deploy the Kubernetes Service Catalog client to the OKE cluster
  2. Install the svcat CLI tool
  3. Deploy the OCI Service Broker
  4. Create a Kubernetes Secret containing OCI credentials
  5. Configure Service Broker with TLS
  6. Configure RBAC (Role Based Access Control) permissions
  7. Register the OCI Service Broker

Once you've installed and registered the service broker, you're ready to use the ATP service plan to provision an ATP instance. I'll go into details below, but the overview of the process looks like so:

  1. Create a Kubernetes secret with a new admin and wallet password (in JSON format)
  2. Create a YAML configuration for the ATP Service Instance
  3. Deploy the Service Instance
  4. Create a YAML config for the ATP Service Binding
  5. Deploy the Service Binding, which results in the creation of a new Kubernetes secret containing the wallet contents
  6. Create a Kubernetes secret for Microservice deployment use containing the admin password and the wallet password (in plain text format)
  7. Create a YAML config for the Microservice deployment which uses an initContainer to decode the wallet secrets (due to a bug which double encodes them) and mounts the wallet contents as a volume

Following that overview, let's take a look at a detailed example. The first thing we'll have to do is make sure that the user we're using with the OCI Service Broker has the proper permissions. If you're using a user that is a member of the group devops, then you would make sure that you have a policy in place that looks like this:

Allow group devops to manage autonomous-database in compartment [COMPARTMENT_NAME]

The next step is to create a secret that will be used to set some passwords during ATP instance creation. Create a file called atp-secret.yaml and populate it similarly to the example below. The values for password and walletPassword must each be a JSON object, and must be base64 encoded. You can use an online tool for the base64 encoding, or use the command line if you're on a Unix system (echo '{"password":"Passw0rd123456"}' | base64).
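A sketch of atp-secret.yaml; the secret name is illustrative, and each data value is the base64-encoded JSON produced by a command like the echo above:

apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
type: Opaque
data:
  password: <base64 of the admin password JSON object>
  walletPassword: <base64 of the wallet password JSON object>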

Now create the secret via: kubectl create -f atp-secret.yaml.

Next, create a file called atp-instance.yaml and populate it as follows (updating the name, compartmentId, dbName, cpuCount, storageSizeTBs, and licenseType as necessary). The parameters are detailed in the full documentation (link below). Note that we're referring to the previously created secret in this YAML file.
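The instance file follows the Service Catalog ServiceInstance shape. The class/plan names and parameter spellings below are best-effort illustrations, so confirm them against the service broker's GitHub docs; note that parametersFrom is why the secret values had to be JSON objects, since each referenced value is merged into the parameters as a JSON fragment:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: atp-demo
spec:
  clusterServiceClassExternalName: atp-service
  clusterServicePlanExternalName: standard
  parameters:
    name: atpdemo
    compartmentId: ocid1.compartment.oc1..[your compartment OCID]
    dbName: atpdemo
    cpuCount: 1
    storageSizeTBs: 1
    licenseType: NEW
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: password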

Create the instance with: kubectl create -f atp-instance.yaml. This will take a bit of time, but in about 15 minutes or less your instance will be up and running. You can check the status via the OCI console UI, or with the command: svcat get instances which will return a status of "ready" when the instance has been provisioned.

Now that the instance has been provisioned, we can create a binding.  Create a file called atp-binding.yaml and populate it as such:
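Something along these lines; atp-demo-binding matches the secret name we query below, and the instance name matches the sketch above:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: atp-demo-binding
spec:
  instanceRef:
    name: atp-demo
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: walletPassword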

Note that we're once again using a value from the initial secret that we created in step 1. Apply the binding with: kubectl create -f atp-binding.yaml and check the binding status with svcat get bindings, looking again for a status of "ready". Once it's ready, you'll be able to view the secret that was created by the binding via: kubectl get secrets atp-demo-binding -o yaml where the secret name matches the 'name' value used in atp-binding.yaml. The secret will look similar to the following output:

This secret contains the contents of your ATP instance wallet, and next we'll mount it as a volume inside of the application deployment. Let's create a final YAML file called atp-demo.yaml and populate it like the sketch below. Note: there is currently a bug in the service broker that double encodes the secrets, so it's necessary to use an initContainer to get the values properly decoded.
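Here is a sketch of the idea, assuming the plain-text admin and wallet passwords from step 6 of the overview live in a secret called atp-demo-creds (name illustrative): the binding secret is mounted raw, an initContainer base64-decodes each wallet file into an emptyDir, and the app container mounts the decoded wallet at /db-demo/creds:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: atp-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atp-demo-app
  template:
    metadata:
      labels:
        app: atp-demo-app
    spec:
      initContainers:
        # work around the double-encoding bug: decode each wallet file
        - name: decode-wallet
          image: alpine:latest
          command:
            - sh
            - -c
            - for f in /tmp/wallet-raw/*; do base64 -d "$f" > /wallet/$(basename "$f"); done
          volumeMounts:
            - name: wallet-raw
              mountPath: /tmp/wallet-raw
            - name: wallet
              mountPath: /wallet
      containers:
        - name: atp-demo
          image: alpine:latest
          command: ["sh", "-c", "sleep 86400"]
          env:
            - name: DB_ADMIN_USER
              value: admin
            - name: DB_ADMIN_PWD
              valueFrom:
                secretKeyRef:
                  name: atp-demo-creds
                  key: password
            - name: WALLET_PWD
              valueFrom:
                secretKeyRef:
                  name: atp-demo-creds
                  key: walletPassword
          volumeMounts:
            - name: wallet
              mountPath: /db-demo/creds
      volumes:
        - name: wallet-raw
          secret:
            secretName: atp-demo-binding
        - name: wallet
          emptyDir: {}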

Here we're creating a basic Alpine Linux container just to test the service instance. Your application deployment would use a Docker image with your application, but the format and premise would be nearly identical. Create the deployment with kubectl create -f atp-demo.yaml and, once the pod is in a "ready" state, we can launch a terminal and test things out a bit:
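For example (get the pod name first; it will differ in your cluster):

kubectl get pods
kubectl exec -it <atp-demo pod name> -- /bin/sh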

Note that we have 3 environment variables available in the container: DB_ADMIN_USER, DB_ADMIN_PWD, and WALLET_PWD. We also have a volume available at /db-demo/creds containing all of the wallet contents that we need to make a connection to the new ATP instance.

Check out the full instructions for more information or background on the ATP service broker. The ability to bind to an existing ATP instance is scheduled as an enhancement to the service broker in the near future, and some other exciting features are planned.

How to Install Oracle Java in Oracle Cloud Infrastructure

Fri, 2019-06-07 11:10
Oracle Java Support and Updates Included in Oracle Cloud Infrastructure

We recently announced that Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost.

In this blog post, I show how to install Oracle Java on Oracle Linux running in an OCI compute shape by using RPMs from the yum servers available within OCI.

Installing Oracle Java

The Oracle Java RPMs are in the ol7_oci_included repository on Oracle Linux yum server accessible from within OCI.

To enable this repository:

$ sudo yum install -y --enablerepo=ol7_ociyum_config oci-included-release-el7

As of this writing, the repository contains Oracle Java 8, 11, and 12.

$ yum list jdk*
Loaded plugins: langpacks, ulninfo
Available Packages
jdk-11.0.3.x86_64    2000:11.0.3-ga      ol7_oci_included
jdk-12.0.1.x86_64    2000:12.0.1-ga      ol7_oci_included
jdk1.8.x86_64        2000:1.8.0_211-fcs  ol7_oci_included

To install Oracle Java 12, version 12.0.1:

$ sudo yum install jdk-12.0.1

To confirm the Java version:

$ java -version
java version "12.0.1" 2019-04-16
Java(TM) SE Runtime Environment (build 12.0.1+12)
Java HotSpot(TM) 64-Bit Server VM (build 12.0.1+12, mixed mode, sharing)

Multiple JDK versions and setting the default

If you install multiple versions of the JDK, you may want to set the default version using alternatives. For example, let’s first install Oracle Java 8:

$ sudo yum install -y jdk1.8

The alternatives command shows that two programs provide java:

$ sudo alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/java/jdk-12.0.1/bin/java
   2           /usr/java/jdk1.8.0_211-amd64/jre/bin/java

Choosing selection 2 sets the default to JDK 1.8 (Oracle Java 8):

$ java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Conclusion

Oracle Cloud Infrastructure includes Oracle Java, with support and updates, at no additional cost. By providing Oracle Java RPMs on OCI’s yum servers, installation is greatly simplified.

Build and Deploy a Golang Application Using Oracle Developer Cloud

Fri, 2019-06-07 10:36

Golang recently became a trending programming language in the developer community. This blog will help you develop, build, and deploy your first Golang-based REST application using Docker and Kubernetes on Oracle Developer Cloud.

Before getting our first Golang application up and running, let’s examine Golang a little.

What is Golang?

Golang, or Go for short, is an open source, statically typed, compiled, general-purpose programming language. It is fast and supports concurrency and cross-platform compilation. To learn more about Go, visit the following link:

https://golang.org/

Let’s get set and Go

To develop, build, and deploy a Golang-based application, you’ll need to create the following files on your machine:

  • main.go - Contains the Go application code and the listener
  • Dockerfile - Builds the Docker image for the Go application code
  • gorest.yml – A YAML file that deploys the Docker image of the Go application on Oracle Container Engine for Kubernetes

Here are the code snippets for the files mentioned above.

main.go

This file imports the required packages and defines the handler() function for incoming requests, which is registered by the main() function, where the HTTP listener port is defined. As the name suggests, the errorHandler() function comes into play when an error occurs.

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello %s!", r.URL.Path[1:])
    fmt.Println("RESTfulServ. on:8093, Controller:", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    fmt.Println("Starting Restful services...")
    fmt.Println("Using port:8093")
    err := http.ListenAndServe(":8093", nil)
    log.Print(err)
    errorHandler(err)
}

func errorHandler(err error) {
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
}

Dockerfile

This Dockerfile pulls the latest Go Docker image from DockerHub, creates an app folder in the container, and then adds all the application files on the build machine (from the Git repository clone) to the app folder in the container. Next, it makes the app directory the working directory, runs the go build command to build the Go code, and sets the container to execute the main binary on startup.

 

FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
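If you'd like to sanity-check the image locally before wiring up the pipeline (assuming Docker is installed; the tag mirrors the one used later in the build job):

docker build -t <DockerHub username>/gorest:1.0 .
docker run --rm -p 8093:8093 <DockerHub username>/gorest:1.0
curl http://localhost:8093/World    # prints: Hello World!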

 

gorest.yml

The script shown below defines the Kubernetes service and deployment, including the respective names, ports, and Docker image that will be downloaded from the DockerHub registry and deployed on the Kubernetes cluster. In the script, we defined the service and deployment as gorest-se, the port as 8093, and the container image as <DockerHub username>/gorest:1.0

kind: Service
apiVersion: v1
metadata:
  name: gorest-se
  labels:
    app: gorest-se
spec:
  type: NodePort
  selector:
    app: gorest-se
  ports:
    - port: 8093
      targetPort: 8093
      name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gorest-se
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gorest-se
        version: v1
    spec:
      containers:
        - name: gorest-se
          image: abhinavshroff/gorest:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8093

 

Create a Git repository in the Oracle Developer Cloud Project

To create a Git repository in the Developer Cloud project, navigate to the Project Home page and then click the +Create Repository button, found on the right-hand side of the page. In the New Repository dialog, enter GoREST for the repository Name and select Empty Repository for the Initial Content option, as shown. Then, click the Create button.

 

You should now see the GoREST.git repository created in the Repositories tab on the Project Home page. Click the Clone dropdown and then click the copy icon, as shown in the screen shot, to copy the Git repository HTTPS URL. Keep this URL handy.

 

Push the code to the Git Repository

Now, in your command prompt window, navigate to the GoREST application folder and execute the following Git commands to push the application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands. Also, you’ll be using the Git URL that you just copied from the Repositories tab, as previously mentioned.

git init

git add --all

git commit -m "First commit"

git remote add origin <git repository url>

git push origin master

Your GoREST.git repository should have the structure shown below.

 

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button. 

In the New Job dialog, enter BuildGoRESTAppl for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build the Docker image for the Go REST application code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/gorest:1.0

In the Docker push step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. Then click the Save button.

To create another build job for deployment, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployGoRESTAppl for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildGoRESTAppl build job to the Kubernetes cluster.

The first thing you’ll do to configure the DeployGoRESTAppl build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, select Unix Shell from the Add Step dropdown and, in the Unix Shell build step, enter the following script.

 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <your cluster OCID> --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f gorest.yml
sleep 30
kubectl get services gorest-se
kubectl get pods
kubectl describe pods

 

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

In the Create Pipeline dialog, you can enter the Name as GoApplPipeline. Then click the Create button.

 

Drag and drop the BuildGoRESTAppl and DeployGoRESTAppl build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Then click on the Save button.

 

Click the Build button, as shown, to run the build pipeline. The BuildGoRESTAppl build job will be executed first and, if it is successful, then the DeployGoRESTAppl build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployGoRESTAppl build job.  Then click the Build Log icon for the executed build.

 

You should see messages that the service and deployment were successfully created.  Search the log for the gorest-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port to access the microservice, as shown below.

 

Enter the IP address and port that you retrieved from the log, into the browser using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/<your name>

You should see the “Hello <your name>!” message in your browser.

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Golang-based REST applications and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Get Up to Speed with Oracle ACEs on the Kscope Database Track

Thu, 2019-06-06 08:03
All Aboard for Database Expertise...

This second post in a series on Kscope 2019 sessions presented by members of the Oracle ACE program focuses on the database track. Kscope 2019 arrives on time, June 23-27 in Seattle. Click here for information and registration.

Click the session titles below for time, date, and location information for each session.

We'll cover sessions in the other tracks in upcoming posts. Stay tuned!

 

Oracle ACE Directors

Oracle ACE Director Alex Nuijten
Director, Senior Oracle Developer, allAPEX
Oosterhout, Netherlands

 

Oracle ACE Director Debra Lilley
Associate Director, Accenture
Belfast, United Kingdom

 

Oracle ACE Director Dimitri Gielis
Director, APEX R&D
Leuven, Belgium

 

Oracle ACE Director Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia

 

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

 

Oracle ACE Director Jim Czuprynski
Senior Enterprise Data Architect, Viscosity North America
Bartlett, Illinois

 

Oracle ACE Director Kim Berg Hansen
Senior Consultant, Trivadis
Funen, Denmark

 

Oracle ACE Director Martin Giffy D’Souza
Director of Innovation, Insum Solutions
Calgary, Alberta, Canada

 

Oracle ACE Director Mia Urman
CEO, AuraPlayer Ltd
Brookline, Massachusetts

 

Oracle ACE Director Patrick Barel
Sr. Oracle Developer, Alliander via Qualogy
Haarlem, Netherlands

 

Oracle ACE Director Peter Koletzke
Technical Director, Principal Instructor
Independent Consultant

 

Oracle ACE Director Richard Niemiec
Chief Innovation Officer, Viscosity North America
Chicago, Illinois

 
Oracle ACEs

Oracle ACE Dani Schnider
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Oracle ACE Holger Friedrich
CTO, sumIT AG
Zurich, Switzerland

 

Oracle ACE Liron Amitzi
Senior Database Consultant, Self Employed
Vancouver, Canada

 

Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

 
Oracle ACE Associates

Oracle ACE Associate Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Oracle ACE Associate Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

 
Additional Resources
