{"id":1418,"date":"2020-05-29T12:50:29","date_gmt":"2020-05-29T10:50:29","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=1418"},"modified":"2021-03-24T16:49:00","modified_gmt":"2021-03-24T15:49:00","slug":"part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/","title":{"rendered":"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service"},"content":{"rendered":"

In this second part of our journey (missed Part 1? Read it here!<\/a>) covering the secrets and practices of Python\u2019s logging, we\u2019ll go a step further: managing multiple app instances (and thus multiple log streams), a pretty common scenario in cloud projects, by aggregating logs using Kinesis Stream, ElasticSearch and Kibana Dashboard. Let\u2019s go!<\/span><\/p>\n

Go aggregate your logs!<\/span><\/h2>\n

When it comes to monitoring a complex application, chasing log records scattered across its distributed components just to figure out what went wrong with your code is not a clever idea.<\/span><\/p>\n

Let\u2019s say you have implemented a Serverless REST API through AWS API Gateway proxy-integrated with AWS Lambda Functions, written in Python, one for each of the endpoints you have defined. Given what we already covered before, in this case logs will presumably be written to AWS CloudWatch log streams through a StreamHandler.\u00a0<\/span><\/p>\n
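To fix ideas, here is a minimal sketch of what logging inside one of those Lambda functions could look like (the handler below and its endpoint logic are purely illustrative); anything the logging module writes ends up in the function\u2019s CloudWatch log stream.<\/p>\n

import logging\r\n\r\n# Lambda routes stdout\/stderr to the function's CloudWatch log stream,\r\n# so the standard logging setup is enough for this scenario.\r\nlogger = logging.getLogger(__name__)\r\nlogger.setLevel(logging.INFO)\r\n\r\n\r\ndef lambda_handler(event, context):\r\n    # Illustrative endpoint logic for an API Gateway proxy integration\r\n    logger.info('Received request for path: %s', event.get('path'))\r\n    return {'statusCode': 200, 'body': 'ok'}\r\n<\/pre>\n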

But what if, instead of searching for log records inside CloudWatch log streams, you would like to analyze them from a centralized dashboard? Well, in this case, the answer is called <\/span>EKK<\/b> stack (Amazon <\/span>E<\/b>lasticsearch Service, Amazon <\/span>Kinesis Data Firehose<\/b>, and <\/span>K<\/b>ibana).<\/span><\/p>\n

Before going into the details of the stack configuration, let\u2019s introduce the role of each of the stack\u2019s actors. The following is just an introduction to the services used for this log, search, and analytics solution. If you want more information about each of the services used, we invite you to consult their dedicated documentation.<\/span><\/p>\n

EKK stack\u2019s actors<\/span><\/h2>\n

Amazon Elasticsearch Service<\/b> is a managed service that allows you to deploy, operate, and scale an Elasticsearch cluster in your AWS account. It provides a search and analytics engine that you can exploit to monitor your application\u2019s logs in real-time.<\/span><\/p>\n

Amazon Elasticsearch Service has built-in integration with <\/span>Kibana<\/b>. Kibana is a tool that provides you with an easy-to-use dashboard where you can monitor and debug your application in a centralized way.<\/span><\/p>\n

Amazon Kinesis Data Firehose<\/b> is the service that acts as a bridge between your log records producers and your Elasticsearch cluster. Kinesis Data Firehose allows you to load streaming data to one or more specific targets. In the solution proposed in this article, Kinesis Data Firehose is used to stream log records produced by different and distributed application components to an Elasticsearch cluster and to an S3 bucket, both hosted in your AWS account. The S3 bucket is used as a backup of your log records and can be used to retrieve historical data.\u00a0<\/span><\/p>\n

As far as the Python ecosystem is concerned, you can rely on the <\/span>AWS boto3 SDK<\/b> to stream local logs directly to a Kinesis Data Firehose delivery stream. By combining Python\u2019s logging module with the boto3 SDK, you can stream your logs to Kinesis Data Firehose. In the next section, we will see how to implement a Python logging module handler that loads a JSON version of your log records into a Kinesis Data Firehose delivery stream.<\/span><\/p>\n
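To give an idea of the underlying call before wrapping it in a logging handler, this is roughly what a single put_record_batch invocation looks like with boto3 (the delivery stream name below is purely illustrative):<\/p>\n

import boto3\r\n\r\n# Assumes AWS credentials and region are already configured in the environment\r\nfirehose = boto3.client('firehose')\r\n\r\n# Each record is a dict with a 'Data' key holding bytes; up to 500 records per call\r\nresponse = firehose.put_record_batch(\r\n    DeliveryStreamName='logging-test',\r\n    Records=[{'Data': 'hello firehose'.encode('utf-8')}]\r\n)\r\n\r\n# FailedPutCount reports how many records were rejected and should be retried\r\nprint(response['FailedPutCount'])\r\n<\/pre>\n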

Extend Python\u2019s logging module<\/span><\/h2>\n

Thanks to the extensible nature of Python\u2019s logging module, it is possible to implement a custom handler that meets our needs. In this section, we will illustrate how to implement a StreamHandler that streams log data to a Kinesis Data Firehose delivery stream.<\/span><\/p>\n

Here\u2019s the implementation:<\/span><\/p>\n

import boto3\r\nimport logging\r\n\r\n\r\nclass KinesisFirehoseDeliveryStreamHandler(logging.StreamHandler):\r\n\r\n   def __init__(self):\r\n       # By default, logging.StreamHandler uses sys.stderr if stream parameter is not specified\r\n       logging.StreamHandler.__init__(self)\r\n\r\n       self.__firehose = None\r\n       self.__stream_buffer = []\r\n\r\n       try:\r\n           self.__firehose = boto3.client('firehose')\r\n       except Exception:\r\n           print('Firehose client initialization failed.')\r\n\r\n       self.__delivery_stream_name = \"logging-test\"\r\n\r\n   def emit(self, record):\r\n       try:\r\n           msg = self.format(record)\r\n\r\n           if self.__firehose:\r\n               self.__stream_buffer.append({\r\n                   'Data': msg.encode(encoding=\"UTF-8\", errors=\"strict\")\r\n               })\r\n           else:\r\n               stream = self.stream\r\n               stream.write(msg)\r\n               stream.write(self.terminator)\r\n\r\n           self.flush()\r\n       except Exception:\r\n           self.handleError(record)\r\n\r\n   def flush(self):\r\n       self.acquire()\r\n\r\n       try:\r\n           if self.__firehose and self.__stream_buffer:\r\n               self.__firehose.put_record_batch(\r\n                   DeliveryStreamName=self.__delivery_stream_name,\r\n                   Records=self.__stream_buffer\r\n               )\r\n\r\n               self.__stream_buffer.clear()\r\n       except Exception as e:\r\n           print(\"An error occurred during flush operation.\")\r\n           print(f\"Exception: {e}\")\r\n           print(f\"Stream buffer: {self.__stream_buffer}\")\r\n       finally:\r\n           if self.stream and hasattr(self.stream, \"flush\"):\r\n               self.stream.flush()\r\n\r\n           self.release()\r\n<\/pre>\n

To be more specific, the provided example defines a class, KinesisFirehoseDeliveryStreamHandler, that inherits the behavior of the native StreamHandler class. The StreamHandler methods that were customized are emit and flush.<\/p>\n

The emit method is responsible for invoking the format method, adding log records to the stream, and invoking the flush method. How log data is formatted depends on the type of formatter configured for the handler. Regardless of how it is formatted, log data will be appended to the __stream_buffer array or, in case something went wrong during the Firehose client\u2019s initialization, to the default stream, i.e. sys.stderr.<\/p>\n

The flush method is responsible for streaming data directly into the Kinesis Data Firehose delivery stream through the put_record_batch API. Once records are streamed to the Cloud, the local __stream_buffer will be cleared. The last step of the flush method consists of flushing the default stream.<\/p>\n

This is an illustrative yet robust implementation that you are free to copy and tailor to your specific needs.<\/p>\n
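One thing worth keeping in mind when tailoring it: put_record_batch accepts at most 500 records per call and reports partial failures through the FailedPutCount field of its response, so a production-grade flush might split the buffer into chunks and re-queue the rejected records. A possible sketch of such a flush method, meant to live in the same class and reuse its private attributes:<\/p>\n

def flush(self):\r\n    self.acquire()\r\n\r\n    try:\r\n        # put_record_batch accepts at most 500 records per call\r\n        while self.__firehose and self.__stream_buffer:\r\n            chunk = self.__stream_buffer[:500]\r\n            del self.__stream_buffer[:500]\r\n\r\n            response = self.__firehose.put_record_batch(\r\n                DeliveryStreamName=self.__delivery_stream_name,\r\n                Records=chunk\r\n            )\r\n\r\n            if response.get('FailedPutCount', 0):\r\n                # Re-queue only the records that Firehose rejected and stop,\r\n                # so a persistent error cannot turn into an infinite loop\r\n                failed = [record for record, result\r\n                          in zip(chunk, response['RequestResponses'])\r\n                          if 'ErrorCode' in result]\r\n                self.__stream_buffer.extend(failed)\r\n                break\r\n    finally:\r\n        if self.stream and hasattr(self.stream, 'flush'):\r\n            self.stream.flush()\r\n\r\n        self.release()\r\n<\/pre>\n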

Once you have included the KinesisFirehoseDeliveryStreamHandler in your codebase, you\u2019re ready to add it to the loggers\u2019 configuration. Let\u2019s see how the previous dictionary configuration changes to introduce the new handler.<\/p>\n

config = {\r\n  \"version\": 1,\r\n  \"disable_existing_loggers\": False,\r\n  \"formatters\": {\r\n      \"standard\": {\r\n          \"format\": \"%(asctime)s %(name)s %(levelname)s %(message)s\",\r\n          \"datefmt\": \"%Y-%m-%dT%H:%M:%S%z\",\r\n      },\r\n      \"json\": {\r\n          \"format\": \"%(asctime)s %(name)s %(levelname)s %(message)s\",\r\n          \"datefmt\": \"%Y-%m-%dT%H:%M:%S%z\",\r\n          \"class\": \"pythonjsonlogger.jsonlogger.JsonFormatter\"\r\n      }\r\n  },\r\n  \"handlers\": {\r\n      \"standard\": {\r\n          \"class\": \"logging.StreamHandler\",\r\n          \"formatter\": \"json\"\r\n      },\r\n      \"kinesis\": {\r\n          \"class\": \"KinesisFirehoseDeliveryStreamHandler.KinesisFirehoseDeliveryStreamHandler\",\r\n          \"formatter\": \"json\"\r\n      }\r\n  },\r\n  \"loggers\": {\r\n      \"\": {\r\n          \"handlers\": [\"standard\", \"kinesis\"],\r\n          \"level\": logging.INFO\r\n      }\r\n  }\r\n}\r\n<\/pre>\n

To include the new custom handler in our configuration, it is enough to add a “kinesis” entry to the “handlers” dictionary and another one to the root logger\u2019s “handlers” array.<\/p>\n

In the “handlers” dictionary\u2019s “kinesis” entry we should specify the custom handler\u2019s class and the formatter used by the handler to format log records.<\/p>\n

By adding this entry to the root logger\u2019s “handlers” array, we are telling the root logger to write log records both in the console and in the Kinesis Data Firehose delivery stream.<\/p>\n

PS: the root logger is identified by “” in the “loggers” section.<\/p>\n
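Note that nothing forces you to go through dictConfig or to attach the new handler to the root logger: you can also attach it programmatically to a specific named logger only. A minimal sketch (the logger name is illustrative, and the import assumes the handler lives in a module named KinesisFirehoseDeliveryStreamHandler, as in the configuration above):<\/p>\n

import logging\r\n\r\nfrom KinesisFirehoseDeliveryStreamHandler import KinesisFirehoseDeliveryStreamHandler\r\n\r\n# Illustrative: ship only this logger's records to Firehose,\r\n# while the rest of the application keeps logging to the console.\r\napi_logger = logging.getLogger('my_app.api')\r\napi_logger.setLevel(logging.INFO)\r\napi_logger.addHandler(KinesisFirehoseDeliveryStreamHandler())\r\n<\/pre>\n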
That\u2019s all with the Kinesis Data Firehose log data producer configuration. Let\u2019s now focus on the infrastructure behind the put_record_batch API, the one used by the KinesisFirehoseDeliveryStreamHandler to stream log records to the Cloud.<\/p>\n

Beyond Kinesis Data Firehose\u2019s put_record_batch API<\/span><\/h2>\n

The architecture components needed to aggregate your application\u2019s log records and make them available and searchable from a centralized dashboard are the following:<\/p>\n

a Kinesis Data Firehose delivery stream;
\nan Amazon Elasticsearch Service cluster.<\/p>\n

To create a Kinesis Data Firehose delivery stream, we move to the AWS management console\u2019s Kinesis dashboard. From the left side menu, we select Data Firehose. Once selected, we should see the list of delivery streams present in the selected region of our AWS account. To set up a brand new delivery stream, we\u2019ll click on the Create delivery stream button in the top right corner of the page.<\/p>\n

In the Create delivery stream wizard, we\u2019ll be asked to configure the delivery stream\u2019s source, transformation process, destination, and other settings, such as the permissions needed by Kinesis Data Firehose to load streaming data to the specified destinations.<\/p>\n

Since we\u2019re loading data directly from our logger through the boto3 SDK, we have to choose Direct PUT or other sources as the delivery stream\u2019s Source.<\/p>\n

We\u2019ll leave the \u201ctransform\u201d and \u201cconvert\u201d options disabled, since they\u2019re not fundamental for the sake of this article.<\/p>\n

The third step of the wizard asks us to specify the delivery stream\u2019s destinations. Assuming that we\u2019ve already created an Amazon Elasticsearch Service cluster in our AWS account, we set it as our primary destination, specifying the Elasticsearch Index name, rotation frequency, mapping type, and retry duration, i.e. how long a failed index request should be retried.<\/p>\n

As a secondary destination of our delivery stream, we will set up an S3 bucket. As mentioned before, this bucket will contain historical logs that are not subject to the Elasticsearch index\u2019s rotation logic.<\/p>\n

We will leave S3 compression, S3 encryption, and error logging disabled and focus on the permissions. This last section requires us to specify or create a new IAM Role with a policy that allows Kinesis Data Firehose to stream data to the specified destinations. By clicking on Create new, we\u2019ll be guided through the creation of an IAM Role with the required permissions policy attached.<\/p>\n
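If you prefer to script this setup instead of clicking through the console, the same delivery stream can also be created with boto3. Here is a sketch under the assumption that the Elasticsearch domain, the backup S3 bucket, and the IAM role already exist (all names and ARNs below are placeholders):<\/p>\n

import boto3\r\n\r\nfirehose = boto3.client('firehose')\r\n\r\nfirehose.create_delivery_stream(\r\n    DeliveryStreamName='logging-test',\r\n    DeliveryStreamType='DirectPut',\r\n    ElasticsearchDestinationConfiguration={\r\n        'RoleARN': 'arn:aws:iam::123456789012:role\/firehose-delivery-role',\r\n        'DomainARN': 'arn:aws:es:eu-west-1:123456789012:domain\/logging-test-domain',\r\n        'IndexName': 'logging-test',\r\n        'IndexRotationPeriod': 'OneDay',\r\n        'RetryOptions': {'DurationInSeconds': 300},\r\n        # 'AllDocuments' keeps a full copy of the records in S3 as historical backup\r\n        'S3BackupMode': 'AllDocuments',\r\n        'S3Configuration': {\r\n            'RoleARN': 'arn:aws:iam::123456789012:role\/firehose-delivery-role',\r\n            'BucketARN': 'arn:aws:s3:::logging-test-backup-bucket'\r\n        }\r\n    }\r\n)\r\n<\/pre>\n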
Log records streaming test<\/span><\/h2>\n

Once the delivery stream is created, we can finally test whether code and architecture were correctly integrated. The actors in play are the ones sketched so far: our application components stream log records to the Kinesis Data Firehose delivery stream, which delivers them to the Amazon Elasticsearch Service cluster and to the S3 backup bucket.<\/p>\n
\nFrom our local machine, we\u2019re going to simulate an App component that loads log data directly to a Kinesis Data Firehose delivery stream. For this test, we will use the config dictionary that already includes the KinesisFirehoseDeliveryStreamHandler.<\/p>\n

import logging.config\r\n\r\nconfig = {...}\r\n\r\nlogging.config.dictConfig(config)\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\ndef test():\r\n   try:\r\n       raise NameError(\"fake NameError\")\r\n   except NameError as e:\r\n       logger.error(e, exc_info=True)\r\n\r\n\r\ntest()\r\n<\/pre>\n

Running this test, a new log record will be generated and written both to the console and to the delivery stream.
\nHere\u2019s the console output of the test:<\/p>\n

{\"asctime\": \"2020-05-11T14:44:44+0200\", \"name\": \"logging_test5\", \"levelname\": \"ERROR\", \"message\": \"fake NameError\", \"exc_info\": \"Traceback (most recent call last):\\n  File \\\"\/Users\/ericvilla\/Projects\/logging-test\/src\/logging_test5.py\\\", line 42, in test\\n    raise NameError(\\\"fake NameError\\\")\\nNameError: fake NameError\"}<\/pre>\n

Well, nothing new. What we expect in addition to the console output is to find the log record in our Kibana console too.<\/p>\n

To enable search and analysis of log records from our Kibana console, we need to create an Index pattern, used by Kibana to retrieve data from specific Elasticsearch Indexes.<\/p>\n

The name we gave to the Elasticsearch index is logging-test. Therefore, indexes will be stored as logging-test- followed by a suffix derived from the rotation frequency we configured. Basically, to make Kibana retrieve log records from every Index whose name starts with logging-test-, we should define the Index pattern logging-test-*. If our KinesisFirehoseDeliveryStreamHandler worked as expected, the Index pattern should match a new Index.<\/p>\n

To filter log records by time, we can use the asctime field that our JSON formatter added to the log record.<\/p>\n

Once the Index pattern is created, we can finally search and analyze our application\u2019s log records from the Kibana console!<\/p>\n
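As a side note, the Index pattern creation can also be scripted against Kibana\u2019s saved objects API instead of going through the UI. A rough sketch with the requests library, assuming the Amazon ES domain is reachable from your machine and its access policy allows the call (the endpoint below is a placeholder, and authentication details depend on how your domain is configured):<\/p>\n

import requests\r\n\r\n# Placeholder endpoint: on Amazon ES domains, Kibana is exposed under \/_plugin\/kibana\/\r\nKIBANA_ENDPOINT = 'https:\/\/my-es-domain.eu-west-1.es.amazonaws.com\/_plugin\/kibana'\r\n\r\nresponse = requests.post(\r\n    f'{KIBANA_ENDPOINT}\/api\/saved_objects\/index-pattern\/logging-test',\r\n    headers={'kbn-xsrf': 'true'},\r\n    json={'attributes': {'title': 'logging-test-*', 'timeFieldName': 'asctime'}}\r\n)\r\n\r\nresponse.raise_for_status()\r\n<\/pre>\n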

\"kibana
It is possible to further customize the log record search and analysis experience to debug your application more efficiently, by adding filters and creating dashboards.<\/p>\n

With that said, this concludes our coverage of Python\u2019s logging module, best practices, and log aggregation techniques. We hope you\u2019ve enjoyed the read and maybe picked up a few tricks along the way.
\nUntil the next article, stay safe \ud83d\ude42<\/p>\n

Read Part 1<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"

In this second part of our journey (missed Part 1? Read it here!) covering the secrets and practices of Python\u2019s […]<\/p>\n","protected":false},"author":7,"featured_media":1446,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[478],"tags":[270,262,266,274],"yoast_head":"\nPart II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service - Proud2beCloud Blog<\/title>\n<meta name=\"description\" content=\"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Part II: Python logging best practices.\" \/>\n<meta property=\"og:description\" content=\"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/\" \/>\n<meta property=\"og:site_name\" content=\"Proud2beCloud Blog\" \/>\n<meta property=\"article:published_time\" content=\"2020-05-29T10:50:29+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-03-24T15:49:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.besharp.it\/wp-content\/uploads\/2020\/05\/copertine-blog-Recuperato-40.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1667\" \/>\n\t<meta property=\"og:image:height\" content=\"1251\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Eric Villa\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"Part II: Python logging best practices.\" \/>\n<meta name=\"twitter:description\" content=\"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/blog.besharp.it\/wp-content\/uploads\/2020\/05\/copertine-blog-Recuperato-40.png\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eric Villa\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/\",\"url\":\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/\",\"name\":\"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service - Proud2beCloud Blog\",\"isPartOf\":{\"@id\":\"https:\/\/blog.besharp.it\/#website\"},\"datePublished\":\"2020-05-29T10:50:29+00:00\",\"dateModified\":\"2021-03-24T15:49:00+00:00\",\"author\":{\"@id\":\"https:\/\/blog.besharp.it\/#\/schema\/person\/2aae452eb3d76073c835d108b04c88e8\"},\"description\":\"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/blog.besharp.it\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.besharp.it\/#website\",\"url\":\"https:\/\/blog.besharp.it\/\",\"name\":\"Proud2beCloud Blog\",\"description\":\"il blog di beSharp\",\"alternateName\":\"Proud2beCloud Blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.besharp.it\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.besharp.it\/#\/schema\/person\/2aae452eb3d76073c835d108b04c88e8\",\"name\":\"Eric Villa\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.besharp.it\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/666de14295bdf007c4c04f336a9e887a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/666de14295bdf007c4c04f336a9e887a?s=96&d=mm&r=g\",\"caption\":\"Eric Villa\"},\"description\":\"Senior DevOps Engineer @ beSharp. A coder who\u2019s an enthusiast about Cloud Computing and technology in general, especially when applied to motorsports and electronic music, my true loves. Serial overthinker; still don\u2019t know if it is good or bad. 
I\u2019m currently focused and committed to open source.\",\"url\":\"https:\/\/blog.besharp.it\/author\/eric-villa\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service - Proud2beCloud Blog","description":"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/","og_locale":"en_US","og_type":"article","og_title":"Part II: Python logging best practices.","og_description":"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.","og_url":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/","og_site_name":"Proud2beCloud Blog","article_published_time":"2020-05-29T10:50:29+00:00","article_modified_time":"2021-03-24T15:49:00+00:00","og_image":[{"width":1667,"height":1251,"url":"https:\/\/blog.besharp.it\/wp-content\/uploads\/2020\/05\/copertine-blog-Recuperato-40.png","type":"image\/png"}],"author":"Eric Villa","twitter_card":"summary_large_image","twitter_title":"Part II: Python logging best practices.","twitter_description":"Part II: Python logging best practices. Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.","twitter_image":"https:\/\/blog.besharp.it\/wp-content\/uploads\/2020\/05\/copertine-blog-Recuperato-40.png","twitter_misc":{"Written by":"Eric Villa","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/","url":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/","name":"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service - Proud2beCloud Blog","isPartOf":{"@id":"https:\/\/blog.besharp.it\/#website"},"datePublished":"2020-05-29T10:50:29+00:00","dateModified":"2021-03-24T15:49:00+00:00","author":{"@id":"https:\/\/blog.besharp.it\/#\/schema\/person\/2aae452eb3d76073c835d108b04c88e8"},"description":"Part II: Python logging best practices. 
Managing multiple app instances to aggregate logs using Kinesis Stream, ElasticSearch and Kibana.","breadcrumb":{"@id":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.besharp.it\/part-ii-python-logging-best-practices-and-how-to-integrate-with-kibana-dashboard-through-aws-kinesis-stream-and-amazon-elasticsearch-service\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.besharp.it\/"},{"@type":"ListItem","position":2,"name":"Part II: Python logging best practices and how to integrate with Kibana Dashboard through AWS Kinesis Stream and Amazon ElasticSearch Service"}]},{"@type":"WebSite","@id":"https:\/\/blog.besharp.it\/#website","url":"https:\/\/blog.besharp.it\/","name":"Proud2beCloud Blog","description":"il blog di beSharp","alternateName":"Proud2beCloud Blog","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.besharp.it\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blog.besharp.it\/#\/schema\/person\/2aae452eb3d76073c835d108b04c88e8","name":"Eric Villa","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.besharp.it\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/666de14295bdf007c4c04f336a9e887a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/666de14295bdf007c4c04f336a9e887a?s=96&d=mm&r=g","caption":"Eric Villa"},"description":"Senior DevOps Engineer @ beSharp. A coder who\u2019s an enthusiast about Cloud Computing and technology in general, especially when applied to motorsports and electronic music, my true loves. Serial overthinker; still don\u2019t know if it is good or bad. I\u2019m currently focused and committed to open source.","url":"https:\/\/blog.besharp.it\/author\/eric-villa\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/posts\/1418"}],"collection":[{"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/comments?post=1418"}],"version-history":[{"count":0,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/posts\/1418\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/media\/1446"}],"wp:attachment":[{"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/media?parent=1418"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/categories?post=1418"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.besharp.it\/wp-json\/wp\/v2\/tags?post=1418"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}