Blog Directory

Google Rolls Out Its Search App ‘Google Go’


Tech giant Google has rolled out its lightweight search app, ‘Google Go’, to users all over the globe. Earlier, the app was only available in a handful of countries where most users rely on low-spec Android phones.

 

Optimized to run swiftly on low-spec Android phones, Google Go also performs well in places where wireless connections tend to be slow or unstable.

 

Aware that a large number of people elsewhere can also benefit from a lightweight Google app, the company has decided to make it available to all Android users:

 

“Millions of people have already used Google Go to find information on the web and make sense of the world around them. But we know that people everywhere can sometimes struggle with spotty connections, phone storage, and reading or translating text.”

 

The app offers features such as the ability to read content out loud, while using less memory and storage and working well over spotty internet connections.

 

The app can read web pages aloud in as many as 28 languages, even on slower connections such as 2G, and can also read text captured in pictures with Google Lens.

 

The feature uses artificial intelligence to determine which parts of a page to read and which to skip, such as text ads between paragraphs.

 

Just like a podcast, users can listen to the audio on its own, or follow along on the screen as words are underlined while they’re read aloud.

 

And whenever the connection gets disrupted, Google Go will remember your place and bring back the last set of search results once the connection is restored.

 

The search app is available now on the Play Store for Android devices.

Saima Naz

Aug 21, 2019

How Deep Learning and Machine Learning are Different


Machine learning and deep learning have gained a lot of popularity in recent years. Both are subsets of artificial intelligence, and the two terms are often used interchangeably, yet they differ in important ways. Examples of machine learning and deep learning are all around us: it’s how Netflix guesses which show you’ll watch next, how Facebook recognizes whose face is in a picture, what makes self-driving cars a reality, and much more.

So, let us start with a basic explanation of both terms.

 

What is Machine Learning and Deep Learning?

AI is the all-encompassing term that emerged first, followed by ML, which thrived later, and finally DL, which promises to take AI’s progress to another level.

Machine learning is currently the best tool for analyzing, understanding and recognizing patterns in data. The main idea behind ML is that a computer is trained to automate tasks that are exhausting or complex for humans, taking decisions with minimal human interference. Machine learning uses data to build an algorithm that captures the relationship between input and output. Once the machine has finished learning, it can predict the value or class of a new data point.
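
To make the input-to-output idea concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the iris dataset and logistic regression are arbitrary choices for illustration.

```python
# Minimal sketch of classical ML: fit a model that maps inputs to outputs,
# then predict the class of unseen data points.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)            # the "learning" phase
print(model.predict(X_test[:5]))       # predict classes of new data points
print(model.score(X_test, y_test))     # how well the learned mapping generalizes
```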

On the other hand, deep learning uses successive layers to learn from data. It is a form of machine learning that mimics the network of neurons in the brain. The learning phase in deep learning is carried out through a neural network, an architecture in which layers are piled on top of each other.
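
As a rough illustration of layers piled on top of each other, here is a small sketch using PyTorch (an assumption; any deep learning library would do), with arbitrary layer sizes and toy data.

```python
import torch
import torch.nn as nn

# Layers stacked on top of each other form the network; sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

x = torch.rand(500, 20)                        # toy inputs
y = (x.sum(dim=1) > 10).float().unsqueeze(1)   # toy binary targets
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):                             # the learning phase
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))                             # final training loss
```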

 

Comparison Between Machine Learning and Deep Learning

Now that you understand the basic definitions of machine learning and deep learning, let us dig deeper into the differences between them.

 

  • Data Dependencies

We will start with the major difference between the two approaches: how performance depends on the amount of data. Deep learning algorithms require a huge amount of data to learn well, so when the dataset is small they do not perform properly. Traditional machine learning, with its handcrafted features and rules, prevails in that setting.
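
One rough way to see this data dependence yourself is to measure accuracy as the training set grows, for example with scikit-learn's learning_curve on a small toy dataset (the effect is far more dramatic at deep learning scale):

```python
# Measure validation accuracy as the amount of training data increases.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=2000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training samples -> validation accuracy {score:.3f}")
```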

 

  • Hardware Dependencies

Deep learning depends heavily on high-end machines, whereas traditional machine learning can run comfortably on low-end machines. In particular, deep learning typically requires GPUs, which are a fundamental part of how it works: deep learning algorithms perform a large number of matrix multiplication operations, and a GPU can execute these efficiently because it is built for exactly that kind of workload.
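
A small sketch of the point about matrix multiplication, assuming PyTorch is available; it falls back to the CPU when no CUDA-capable GPU is present:

```python
# The matrix multiplications at the heart of deep learning are exactly the
# kind of operation a GPU is built to parallelize.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.rand(4096, 4096, device=device)
b = torch.rand(4096, 4096, device=device)

start = time.time()
c = a @ b                          # one large matrix multiplication
if device == "cuda":
    torch.cuda.synchronize()       # wait for the GPU to finish its work
print(f"matmul on {device}: {time.time() - start:.4f} s")
```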

 

  • Feature Engineering

Feature engineering is the process of putting domain knowledge into the design of feature extractors, reducing data complexity and making patterns more visible to learning algorithms. It is tricky and costly in terms of time and expertise. In machine learning, most of the features used have to be identified by a specialist and then hand-coded according to the domain and data type, while deep learning algorithms try to learn high-level features from the data itself. This is a distinctive aspect of deep learning and a significant step beyond traditional machine learning, since it minimizes the work of developing a new feature extractor for every problem.
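
A toy sketch of hand-coded feature engineering with pandas; the timestamp column and the features extracted from it are hypothetical examples, not from any real dataset:

```python
# Hand-coded feature engineering: an expert decides which properties of the
# raw data might matter and extracts them explicitly.
import pandas as pd

df = pd.DataFrame({"timestamp": pd.to_datetime(
    ["2019-08-01 08:30", "2019-08-03 22:10", "2019-08-05 13:45"])})

df["hour"] = df["timestamp"].dt.hour               # time of day
df["day_of_week"] = df["timestamp"].dt.dayofweek   # weekday vs weekend behaviour
df["is_weekend"] = df["day_of_week"] >= 5
print(df)
# A deep network fed the raw signal would have to discover such regularities
# on its own instead of relying on hand-built columns like these.
```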

 

  • Problem Solving Approach

A traditional machine learning approach generally requires breaking a problem into separate parts, solving them individually and combining the results. In contrast, deep learning aims to solve the problem end-to-end. Let’s explain this with an example.

 

Suppose you are asked to detect multiple objects in an image: you need to say what each object is and where it appears in the picture. In a classical machine learning pipeline, the problem is split into object detection and object recognition. First, an algorithm such as GrabCut scans the picture and proposes all possible objects. Then, for each candidate region, an object recognition algorithm such as an SVM with HOG features identifies the relevant objects. A deep learning algorithm, by contrast, would perform the whole process end-to-end, taking the image as input and returning the labelled objects and their locations directly.
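
Below is a compressed, hedged sketch of the hand-built recognition stage of such a pipeline: HOG descriptors (via scikit-image) fed to an SVM (via scikit-learn). It uses the tiny digits dataset purely for illustration and omits the region-proposal step described above.

```python
# Classical recognition stage: compute a hand-crafted HOG descriptor for each
# image, then train an SVM on those descriptors.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

digits = load_digits()
features = np.array([
    hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])
X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("HOG + SVM accuracy:", clf.score(X_test, y_test))
```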

 

  • Execution Time

Deep learning usually requires more time to train than machine learning, because a deep learning algorithm has so many parameters. Machine learning, on the other hand, requires much less training time, from a few seconds to a few hours.

 

At test time, however, the situation is reversed: a deep learning algorithm needs far less time to run. With nearest neighbours (a type of machine learning algorithm), by contrast, test time grows as the size of the data increases.
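
A quick sketch of that trade-off with scikit-learn's k-nearest-neighbours classifier; the dataset sizes are arbitrary and the exact timings will vary by machine:

```python
# k-nearest neighbours "trains" almost instantly (it just stores the data),
# but prediction slows down as the stored dataset grows.
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

for n in (1_000, 10_000, 100_000):
    X = np.random.rand(n, 20)
    y = np.random.randint(0, 2, n)
    clf = KNeighborsClassifier().fit(X, y)    # near-instant "training"

    start = time.time()
    clf.predict(np.random.rand(1_000, 20))    # test time grows with n
    print(f"n={n:>7}: prediction took {time.time() - start:.3f} s")
```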

 

  • Interpretability

Interpretability is the main reason deep learning is still considered ten times over before it is used in industry. Suppose a deep learning system marks essays automatically. The scoring is accurate and quite human-like, but the system cannot explain why a particular score was given. You can inspect which nodes of the neural network were activated, but you do not know what those neurons were meant to model or what the layers were doing together, so the results cannot be interpreted.

 

Machine learning algorithms such as decision trees, by contrast, give us explicit rules for why they chose what they chose, so the reasoning behind them is particularly easy to interpret. That is why algorithms such as decision trees and linear/logistic regression are preferred in industry wherever interpretability matters.
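
For example, a decision tree's learned rules can simply be printed and read, which is the kind of interpretability described above (a minimal sketch with scikit-learn; the iris dataset is just an example):

```python
# Decision trees can explain their own reasoning: the learned rules can be
# printed and read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
print(export_text(tree, feature_names=iris.feature_names))
# Output is a human-readable set of if/else rules over the input features.
```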

Saima Naz

Aug 21, 2019

Apple Allegedly Boosts TV Outlay by $5 Billion


A new report by the Financial Times alleges that tech giant Apple has committed a staggering $5 billion more to its original video content budget in an effort to compete effectively with Amazon, Disney, HBO, Netflix, and Hulu.

 

The company had initially earmarked $1 billion for former Sony Pictures Television officials Jamie Erlicht and Zack Van Amburg to invite renowned creators and Hollywood stars to its platform. According to the publication, that number has risen to $6 billion as more shows have moved through production and budgets have swollen.

 

The FT says that one production has cost Apple hundreds of millions of dollars, while Apple is separately reported to be spending $300 million on just the first two seasons of a single show.

 

Apple’s inclination to instantly match what Netflix was spending yearly on original content only a few years ago shows how intense the streaming wars are set to become in the coming months and years.

 

Apple’s TV Plus service launches this autumn, anchored by a slate of programming featuring big names like Oprah Winfrey and Steven Spielberg. The company’s services chief Eddy Cue has said the tech behemoth plans to add new content at a slower pace than its soon-to-be competitors, prioritizing quality over quantity.

 

Nevertheless, Apple will be going up against not just the current streaming titans but also newcomers like Disney. In 2020 there will also be WarnerMedia’s new HBO Max to contend with, a streaming service expected to combine live TV, including news and sports, and a wider variety of content from across every WarnerMedia property with all of HBO’s current offerings.

 

In the meantime, Amazon, Disney, and Netflix are spending staggering amounts of money to vie with one another.

Saima Naz

Aug 20, 2019

How Artificial Intelligence can Learn Human Behavior


Humans love their unpredictable nature and love working on things the way they want, without anyone’s interference or imposed behaviours. That makes human behaviour complicated, but after the fourth industrial revolution and the emergence of artificial intelligence, humans have become more predictable.

 

 

How has AI impacted human life, and how is it able to learn human behaviour? Let us first understand what AI is.

The most common example when talking about AI is the self-driving car. ‘Artificial’ simply means created by humans; the real question is what we mean by ‘intelligence’. It is the ability to acquire knowledge and the problem-solving skills needed to tackle different problems.

 

 

Basically, AI is the simulation of human intelligence by machines. The process includes learning (what information we have and what rules are needed to solve the problem), reasoning (producing definite or approximate results) and correction after the results come in.

Having defined AI, let’s look briefly at its types to build a better understanding.

 

 

Types of AI

Arend Hintze, an assistant professor at Michigan State University, categorized AI into four types.

 

Type 1: Reactive machines

IBM’s chess program Deep Blue is an example of a reactive machine. The program analyses and identifies the pieces on the board and makes predictions from their positions. The drawback is that it has no memory, so it cannot use past experience to inform future predictions. It analyses the moves possible for both sides and then chooses its next move as part of a strategy.
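
The "consider both sides' possible moves" idea can be sketched with plain minimax over a tiny hand-written game tree; this is only an illustration of the principle, not Deep Blue's actual engine:

```python
# Toy minimax: the maximizing player assumes the opponent will always pick
# the move that is worst for them, and plans accordingly.
def minimax(node, maximizing):
    if isinstance(node, int):          # a leaf: the position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree: lists are positions, ints are final scores.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))   # best guaranteed score: 3
```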

 

Type 2: Limited Memory

In this type, AI systems use past experience to make future predictions. A self-driving car is an example: recent observations inform the actions it takes in the near future, and those observations are removed from memory after some time.

 

Type 3: Theory of mind

Theory of mind refers to understanding that people have desires, beliefs and intentions that shape the decisions they make. AI of this type does not yet exist.

 

Type 4: Self-awareness

In this type, AI would have a sense of self, observing and acting accordingly. Such systems would be aware of what others are feeling and know how to use that information. AI of this type does not yet exist either.

 

How AI Actually Helps in Learning Human Behaviour

In past eras, human behaviour was unpredictable because there was no way to analyse and track how people take decisions and what process they go through to reach them. Today, with advanced systems and deep learning working over huge amounts of stored data, we can parse that data and learn the patterns that drive people’s choices.

 

A seller asks himself a question: how can I sell more of my products? What do you think the answer is? You are right if you are thinking it’s data. The real point, however, is how you use that data to your benefit.

 

The best-selling product in any market is food. We are always thinking about what our next meal should be: whether to go out or eat at home, and what kind of meal to have. Predicting someone’s next meal is hard. If I have a list of everything I’ve eaten for the last four to six months, I might be able to predict my next meal from the patterns in it. But the result may not be accurate, because my previous meals are not the only thing that determines my future ones. Many other factors matter: what kind of breakfast I had, whether I want a heavy lunch or a lighter one, what exactly I am craving, and plenty of other reasons that can make the prediction inaccurate.
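
As a toy illustration, a prediction like this can be sketched as nothing more than counting which meal most often followed a given context; the meal history below is entirely made up, and a real system would need far richer features:

```python
# Predict the "next meal" by conditioning on context (the kind of breakfast)
# and picking the most frequent choice that followed it before.
from collections import Counter, defaultdict

history = [("light breakfast", "biryani"), ("heavy breakfast", "salad"),
           ("light breakfast", "biryani"), ("light breakfast", "karahi"),
           ("heavy breakfast", "soup"),    ("heavy breakfast", "salad")]

by_context = defaultdict(Counter)
for breakfast, lunch in history:
    by_context[breakfast][lunch] += 1

def predict_lunch(breakfast):
    return by_context[breakfast].most_common(1)[0][0]

print(predict_lunch("light breakfast"))   # -> "biryani"
# Real systems face many more factors (cravings, weather, budget),
# which is why they need far more data than this.
```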

 

Collecting enough data by hand would take years of work before a pattern accurate enough for prediction emerged.

 

In the scenario above I am only talking about my own data, but what does a seller really need? Data from millions of people, in order to understand what to offer them. AI makes this feasible by collecting data on human behaviour at a vast scale, so that executives can work out which strategy will capture people’s attention.

 

Human behaviour is complicated: it has its own logic, but sometimes it acts illogically. That happens when we are in the grip of emotions we do not fully understand; we cannot say why we feel them or why we perform a particular action. Our culture, our ways of thinking and what we perceive from our surroundings are a big part of the reason, which means psychology plays a vital role in human nature. Now think about how much data would need to be stored to understand all such behaviour, and what would happen if it were a person’s job to store it all: the result would be errors and wrong patterns. Here again AI delivers accurate results. Tools have been built that can sense people’s stress and anxiety, which helps in understanding and managing their behaviour, and other AI tools help people with physical conditions and daily routines to maintain a happy and healthy life.

 

It is worth noting that AI can be powerful when used as a marketing tool. In 2019 we still face real limits on how far communication can track human behaviour, but predictive modelling already gives us insights and rapid feedback on behavioural change. An AI model learns when to intervene and which actions are best suited to a particular person.

Saima Naz

Aug 17, 2019

Podcast mobile app usage surges to 60% since January 2018


According to a study by Adobe Analytics, usage of podcast mobile apps has increased 60% since January 2018.

The study says usage is likely to grow even further, as 45% of listeners plan to tune into more podcasts in the future.

The study, which covered 193 million monthly unique visitors to U.S. mobile apps, found that 41% of podcast discovery happens through online sources such as blogs and articles.

 

According to the report, almost 72% of respondents feel podcast quality is improving, while just 6% think quality is declining. Nearly 52% of respondents said they listen to podcasts while working or travelling.

The study found education, history, and documentary to be among the most popular genres, while video games were among the least popular.

 

A majority of listeners, 60%, said they had looked up a service or product after hearing about it on a podcast, with 25% reporting that they went on to make a purchase. Nevertheless, 58% of respondents said they skip podcast ads.

 

The podcast sector’s momentum makes podcast advertising an increasingly viable way to reach a growing audience. And since more data is available from platforms like Spotify, advertisers have better targeting capabilities to help them reach the listeners most likely to be interested in their products or services.

 

Saima Naz

Aug 16, 2019

Top 10 Machine Learning Tools in 2019


Machine learning leads the job market in the technology world, and data science and machine learning now underpin most other technologies. Whether in medicine or engineering, the field is adaptable enough to serve all sorts of domains. To do data science well, we need top machine learning tools, software and frameworks: tools that can produce accurate outcomes even when the huge amounts of training data we start from are messy (as they usually are). These tools give their best results when we build well-defined software around them, and it is the effectiveness of such tools that lets machines act largely on their own today. There are plenty of machine learning tools for beginners as well. The best machine learning software and tools to learn in 2019 are listed below, in no particular order:

 

1. Google Cloud ML Engine

Data, looked at closely, is full of detail, and once you consider people’s data from around the world you have millions or billions of training examples. A single PC cannot process a dataset that large, and your algorithm may not scale to it either. What do you do then? This is where Google Cloud ML Engine comes in: a managed service that lets you train models on your data the way you want. Data scientists use it to run their high-quality machine learning models at scale.

 

Key Features:
  • Provides ML model building, predictive modelling, training, and deep learning.
  • Prediction and training can be used either together or independently.
  • It is used by enterprises for tasks such as processing customer emails and messages or detecting clouds in satellite imagery.
  • It can be used to train large and complex models.

2. Amazon Machine Learning (AML)

A cloud-based machine learning service that is robust and easy to use for developers at all skill levels. Developers use it to build high-quality machine learning models and generate predictions. Data integration is handled by connecting sources such as Amazon S3 and others.

Key Features:
  • AML provides visualization tools and wizards.
  • It supports binary classification, multi-class classification, and regression models.
  • A MySQL database can be used to create a data source object.
  • Amazon Redshift is another supported source from which you can create a data source object.
  • It is organized around the concepts of data sources, ML models, evaluations, batch predictions, and real-time predictions.

3. Apache Mahout

A simple framework for distributed linear algebra, expressed in a Scala DSL. It is a free and open-source project of the Apache Software Foundation, aimed at letting data scientists, mathematicians, and statisticians implement their own algorithms quickly.

Key Features:
  • Helps in building scalable algorithms; the framework is designed so new capabilities and functionality can be added.
  • Implements machine learning techniques such as clustering, classification, and recommendation.
  • Includes matrix and vector libraries that make handling data easier.
  • It builds on the MapReduce paradigm and Apache Hadoop.

4. Accord.NET

Accord.NET is another machine learning framework, written in C#, that includes audio and image processing libraries. It comprises several libraries with a wide scope of use, covering linear algebra, pattern recognition, and statistical data processing.

Accord.Math, Accord.Statistics, and Accord.MachineLearning are among the libraries that make up Accord.NET.

Key Features:
  • Accord.NET is used across multiple areas such as signal processing, computer vision, computer audition and statistical applications.
  • Includes more than 40 parametric and non-parametric statistical distributions with estimation support.
  • Provides hypothesis testing with more than 35 tests, including non-parametric tests, one-way and two-way ANOVA, and more.
  • Offers more than 38 kernel functions.

5. Shogun

Shogun is an open-source machine learning library created in 1999 by Soeren Sonnenburg and Gunnar Raetsch. Written in C++, it solves machine learning problems through the algorithms and data structures it provides, and it supports numerous languages such as Python, R, Octave, Java, C#, Ruby, Lua, and more.

Key Features:
  • It is developed with large-scale learning in mind.
  • It centres on kernel machines such as support vector machines for classification and regression problems.
  • It can connect to other machine learning libraries such as LibSVM, LibLinear, SVMLight, LibOCAS, and more.
  • Its interface can be used from Python, Lua, Octave, Java, C#, Ruby, MATLAB, and R.
  • It can process datasets of millions of samples.

6. Oryx 2

Oryx 2 is a realization of the lambda architecture, built on Apache Spark and Apache Kafka and used for real-time, large-scale machine learning. It is designed for developing end-to-end applications for filtering, classification, regression, and clustering. Oryx 2.8.0 is the latest released version.

Key Features:
  • An updated version of Oryx 1 with further improvements for handling data accurately.

Tiers:

  • Generic lambda architecture tier
  • Specialization on top providing ML abstractions
  • End-to-end implementation of the same standard ML algorithms

Layers:

  • Batch layer
  • Speed layer
  • Serving layer

There is also a data transport layer that moves data between the layers and takes input from external sources.

7. Apache Singa

Apache Singa was started in 2014 by the DB System Group at the National University of Singapore, in collaboration with the database group of Zhejiang University. It is used for natural language processing (NLP) and image recognition and supports a wide range of mainstream deep learning models. The software comprises three fundamental components: Core, IO, and Model.

Key Features:
  • Flexible architecture for scalable distributed training
  • A tensor abstraction that allows more advanced machine learning models
  • Improved IO classes for writing, reading, encoding and decoding files and data
  • Device abstraction is supported for running on a variety of hardware devices

8. Google ML Kit for Mobile

Are you a mobile developer? Google’s Android team offers ML Kit, which bundles machine learning expertise and technology so you can build more robust, personalized, and optimized applications that run on a device. You can use it for face and image recognition, image labelling, landmark detection and barcode scanning applications.

Key Features:
  • Offers powerful, ready-to-use machine learning technologies
  • Models can run on the device itself or in the cloud
  • Supports custom models when the built-in solutions are not enough

9. Apple’s Core ML

Apple’s Core ML is another machine learning framework, used to integrate machine learning models into your app. You use an ML model by dropping its file into your project, and Xcode automatically generates an Objective-C or Swift wrapper class for it.

The framework is optimized for on-device performance across the processors available (CPU and GPU).

Key Features:
  • Acts as the foundation for domain-specific frameworks and functionality
  • On-device performance is heavily optimized
  • It is built on top of low-level primitives

10.  TensorFlow

TensorFlow is another big name, known to every machine learning practitioner. Created by Google, it is an open-source platform for building ML models, with a flexible ecosystem of libraries and resources that lets developers create applications; a brief sketch follows the feature list below.

Key Features:
  • A comprehensive set of deep learning mechanisms
  • Highly flexible open-source software
  • High-level APIs (such as Keras) for training and running ML models
  • Numerical computation expressed as data flow graphs
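
Here is the brief sketch promised above: a minimal Keras model defined, compiled and trained on toy data with TensorFlow 2.x (version, shapes and sizes are assumptions for illustration):

```python
# Define a small model with the high-level Keras API, then train it on toy
# data; TensorFlow builds and runs the underlying computation graph.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(200, 10).astype("float32")
y = X.sum(axis=1, keepdims=True)          # toy regression target
model.fit(X, y, epochs=3, verbose=0)
print(model.predict(X[:3]))               # predictions for the first samples
```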

Saima Naz

Aug 15, 2019

Contribution of IT Sector in the Growth of Pakistan


Over the past 71 years, Pakistan’s growth in the IT sector has, on the whole, been disappointing at first, but the past 10 to 15 years have been splendid. At independence, the country inherited only 34 industries out of more than 900, and among those the number of IT-related industries was minimal. As time passed and Pakistan found its feet, it began devoting resources to the rapid development and growth of the IT industry.

 

There is no doubt that Pakistan has had talented and enthusiastic generations from the time it became a separate state until now, and they are the reason for its emergence and economic growth. The early years were very difficult for all industries, including IT, where little was achieved and nobody really paid attention to the sector because of the other challenges Pakistan was facing, such as political instability, energy deficits and a lack of promotion.

 

In a nutshell, the era from Pakistan’s creation until 1990 did not favour the IT industry, but since then the information technology (IT) sector has been strengthening and growing at a remarkable pace, in both the local market and export services. According to a recent survey, overall business crossed 3.3 billion in 2018, compared with 2.8 billion in 2016-2017, as per the records of the Pakistan Software Export Board (PSEB).

 

Other industries need heavy machinery, infrastructure and tools, while the IT industry requires no such giant investments, which is a plus point for Pakistan. Information technology mainly requires people who can innovate and adapt to change quickly, and Pakistan is blessed with such talent. Pakistan’s IT industry is now emerging internationally and gaining coverage for the inventions and solutions made by Pakistani people. More than one hundred thousand people are formally employed in the field, and many more work in it informally. Unfortunately, the IT industry has not been granted the same industry status that textiles and other industries enjoy. If the government takes the necessary steps to promote the sector and provide the education it needs, it could have a ground-breaking effect on GDP and foreign direct investment.

 

Over the last four years, the growth rate in the IT field has been approximately 97 percent. Information and communication technologies (ICTs) have gained global importance and empower economies, helping to trigger growth across all other sectors.

 

Contribution of Government of Pakistan in the IT industry

In 2000, the first IT policy and implementation strategy was approved, becoming a pillar for founding new industries and developing new technologies. In 2002, a teacher-training programme was launched in Pakistan, led by Intel at the request of Prof. Atta-ur-Rahman, which produced 220,000 trained teachers across almost 70 districts without the government spending a penny. It is fair to say that Prof. Atta-ur-Rahman is a major reason for the rise of Pakistan’s IT industry: he served the sector from top to bottom, introducing reforms and increasing research productivity in Pakistan. Since then, Pakistan has recorded one of the highest growth rates in highly cited papers compared with other large countries.

 

In 2001, it was reported that Pakistan had more than 20 million internet users, one of the highest growth rates in internet penetration recorded among countries at the time.

 

During 2003-2005, Pakistan’s IT exports rose by about 50 percent to roughly 48.5 million USD. In 2012-2013 the Government of Pakistan decided to spend 4.6 billion on IT projects and introduced programmes for e-government, infrastructure and human resource development.

 

During the last decade, the IT sector has received incentives from the government of Pakistan for the establishment and development of new businesses. The period 2013-2015 was a ground-breaking era for the industry, as 3G/4G technologies were launched.

 

The launch of computerized e-government systems has brought real progress in all major departments, including law enforcement agencies, the police and district administrations. The National Database and Registration Authority (NADRA) also runs a computerized system that helps the organization keep accurate records and issue important documents. Civil services and other government departments have improved by introducing such systems, which make critical work easier.

 

The UN’s Economic and Social Commission for Asia and the Pacific (ESCAP) published a study in which it described Pakistan as a fast-emerging country following the introduction of e-commerce and e-governance.

 

After the government’s initiatives and new IT policies, software development began to grow rapidly, which eventually drove the increase in export services. People are educating themselves and being hired by renowned companies that are developing more services and launching new businesses in Pakistan. Big industries such as textiles, pharmaceuticals, and food and beverages are now adopting software services to work with more accuracy and to grow faster. Mobile application and game development is another great achievement: it fascinates the young generation, who are enthusiastic about learning and building new games and applications and are gaining international recognition for innovative apps that help solve critical global problems. Educational institutes now offer diplomas and other short courses in software development for this young generation, who will surely take the industry to the top.

 

The discussion above shows that Pakistan has massive growth potential in the IT industry, though major improvement and advancement are still needed to compete with other developing nations. With clear direction and strategic planning, Pakistan can be among the most advanced nations in IT, which would also help alleviate poverty. Now is the make-or-break moment: in the Fourth Industrial Revolution the whole world starts on a similar footing in IT, and whoever strives the hardest will lead it.

Saima Naz

Aug 14, 2019

Indexing issues keep Google Search from Showing New Content


Some new content across the web is not appearing in search results, as Google is facing indexing problems today. For instance, a quick glance at Top Stories in Google News reveals that some stories are from the last few hours but many are from yesterday. The issues were first reported by Search Engine Land.

 

On Thursday, the company confirmed that it was reviewing reports of indexing problems, later adding that it was seeing “issues in the URL Inspection tool within Search Console.”

 

According to Google, the URL Inspection problem was resolved as of 11:41AM ET, but no update has been provided on indexing; Google says it is still working to fix that issue.

 

To find new content to show users, Google continually crawls the internet. When Google runs into indexing issues, as it did today, search results are not as fresh or accurate as users expect from the tech behemoth.

 

Of late, Google has faced a slew of indexing issues, including problems that lasted for a week in April and more than three days in May.

 

Saima Naz

Aug 9, 2019