Introduction to Web Scraping

What is Web Scraping?

Web scraping is the process of extracting data from websites.

DIFFERENT LIBRARIES/FRAMEWORKS FOR SCRAPING:

Scrapy:- If you are dealing with a complex scraping operation that requires high speed and low resource consumption, then **Scrapy** would be a great choice.

Beautiful Soup:- If you’re new to programming and want to work on web scraping projects, you should go for **Beautiful Soup**. It is easy to learn, and you will be able to perform operations quickly, up to a certain level of complexity.

Selenium:- When you are dealing with core JavaScript-based web applications and need browser automation with AJAX/PJAX requests, then **Selenium** is a great choice.

CHALLENGES WHILE SCRAPING DATA:

Pattern Changes:

Problem: Every website periodically changes its UI. Scrapers usually need modification every few weeks to keep up with these changes, or else they will return incomplete data or crash.
Solution: You can write test cases for your parsing and extraction logic and run them regularly, ideally from a continuous integration tool, to catch failures early, as in the sketch below.
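A minimal test sketch, assuming pytest and a spider that yields item dictionaries with a text field; the fixture path and import path are illustrative:

from scrapy.http import HtmlResponse

from tutorial.spiders.quotes_spider import QuotesSpider  # illustrative import path

def test_parse_extracts_quotes():
    # Build a fake response from a saved copy of the page so the test
    # fails loudly when the site's markup changes.
    with open('tests/fixtures/quotes-page-1.html', 'rb') as f:
        body = f.read()
    response = HtmlResponse(
        url='http://quotes.toscrape.com/page/1/', body=body, encoding='utf-8')
    items = list(QuotesSpider().parse(response))
    assert items, 'parser returned nothing - page structure may have changed'
    assert all(item.get('text') for item in items)

Run this on every CI build so a silent markup change shows up as a failing test instead of weeks of bad data.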

Anti- Scraping Technologies:

Problem: Some websites use anti-scraping technologies, LinkedIn, for instance. If you hit a particular website repeatedly from the same IP address, there is a high chance the target website will block that IP address.
Solution: Proxy services with rotating IP addresses help in this regard. Proxy servers mask your IP address and can improve crawling speed. Scraping frameworks like Scrapy provide easy integration with several rotating-proxy services; one possible setup is sketched below.
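One option is the third-party scrapy-rotating-proxies package (pip install scrapy-rotating-proxies); a sketch for settings.py, where the proxy URLs are placeholders:

ROTATING_PROXY_LIST = [
    'http://proxy1.example.com:8000',
    'http://proxy2.example.com:8031',
]

DOWNLOADER_MIDDLEWARES = {
    # middlewares provided by scrapy-rotating-proxies
    'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
    'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
}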

Javascript-based Dynamic Content:

Problem: Websites that rely heavily on JavaScript and AJAX to render dynamic content make data extraction difficult. Scrapy and similar frameworks/libraries only extract what they find in the static HTML document; AJAX calls and JavaScript run at page load time in a browser, so these tools can’t scrape that content on their own.
Solution: This can be handled by rendering the web page in a headless browser like headless Chrome, which essentially allows running Chrome in a server environment. Another alternative is to use Selenium for JavaScript-heavy pages.
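A minimal Selenium sketch, assuming Selenium is installed and chromedriver is on your PATH; the URL points at the JavaScript-rendered demo version of the quotes site:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')           # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get('http://quotes.toscrape.com/js/')
    html = driver.page_source                # the HTML after JavaScript has run
finally:
    driver.quit()

The rendered html string can then be handed to any parser, including Scrapy selectors or BeautifulSoup.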

Quality of Data:

Problem: Records that do not meet the quality guidelines affect the overall integrity of the data. Ensuring the data meets the guidelines while crawling is hard, because the checks need to run in real time, and faulty data can cause serious problems downstream.
Solution: Write test cases and validation checks. You can make sure whatever your spiders extract is correct, and that they are not scraping wrongly structured data.
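For illustration, a Scrapy item pipeline (pipelines come up again later in this post) can enforce such checks in real time and drop bad records; the field names here are illustrative:

from scrapy.exceptions import DropItem

class QualityCheckPipeline:
    required_fields = ('text', 'author')

    def process_item(self, item, spider):
        # reject any record with a missing or empty required field
        for field in self.required_fields:
            if not item.get(field):
                raise DropItem('missing or empty field: %s' % field)
        return item

Enable it by adding the class to your project’s ITEM_PIPELINES setting.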

Captchas:

Problem: Captchas serve a great purpose in keeping spam away. However, they also pose a serious obstacle for web crawling bots. When captchas are present on a page you need to scrape, a basic web scraping setup will fail and cannot get past this barrier.
Solution: For this, you need middleware that can detect the captcha, solve it, and return a usable response. A rough skeleton is sketched below.
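A rough skeleton of such a downloader middleware; solve_captcha() is a hypothetical stand-in for whatever solving service you integrate, not a real API:

class CaptchaMiddleware:
    def process_response(self, request, response, spider):
        # naive detection - adjust to how the target site serves captchas
        if b'captcha' in response.body.lower():
            token = solve_captcha(response)  # hypothetical solver call
            # re-schedule the original request, carrying the token along
            return request.replace(
                dont_filter=True,
                meta={**request.meta, 'captcha_token': token})
        return response

def solve_captcha(response):
    # placeholder: integrate a real captcha-solving service here
    raise NotImplementedError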

Maintaining Deployment:

Problem: If you’re scraping millions of websites, you can imagine the size of the codebase. It even becomes very hard to run all the spiders reliably.
Solution: You can Dockerize your spiders and run them on a schedule.
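A minimal Dockerfile sketch for a Scrapy project (the quotes spider is the one built later in this post):

FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["scrapy", "crawl", "quotes"]

A scheduler such as cron, or whatever orchestrator you already run, can then start the container at the desired interval.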

Scraping Guidelines/Best Practices:

Robots.txt file: robots.txt is a text file webmasters create to instruct robots (typically search engine robots) how to crawl and index pages on their website, so it generally contains instructions for crawlers. It should be the first thing you check when planning to scrape a website: most websites set rules there on how bots/spiders may interact with the site, and you can check those rules programmatically, as shown below.
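A quick check using Python’s standard library:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('http://quotes.toscrape.com/robots.txt')
rp.read()
print(rp.can_fetch('*', 'http://quotes.toscrape.com/page/1/'))  # True if allowed

In Scrapy, setting ROBOTSTXT_OBEY = True in settings.py makes the framework respect robots.txt automatically.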

Do not hit the servers too frequently: Web servers are not fail-proof. Any web server will slow down or crash if the load on it exceeds the limit it can handle. Sending multiple requests too frequently can bring the website’s server down or make the site too slow to load, so throttle your crawl, as shown below.
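Scrapy ships with built-in settings for this; a polite baseline in settings.py might look like:

DOWNLOAD_DELAY = 2                   # wait about 2 seconds between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 4   # cap parallel requests per site
AUTOTHROTTLE_ENABLED = True          # back off automatically when the server slows down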

User-Agent rotation: The User-Agent header in a request identifies which browser is being used, which version, and on which operating system. Every request made from a web browser contains a user-agent header, and using the same user-agent consistently leads to the detection of a bot. Rotating the user-agent is the best solution for this, as sketched below.
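A minimal sketch with the requests library; the user-agent strings below are just examples, and a real pool should be larger and kept up to date:

import random

import requests

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

headers = {'User-Agent': random.choice(USER_AGENTS)}   # a different UA each run
response = requests.get('http://quotes.toscrape.com/', headers=headers)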

Do not follow the same crawling pattern: Humans browse erratically; only bots follow the exact same crawling pattern, because programmed bots follow very specific logic. Sites with intelligent anti-crawling mechanisms can easily detect such spiders, so introduce random delays and vary the order of your actions where possible.

Scrapy Vs. BeautifulSoup

In this section, you will get an overview of one of the most popular web scraping tools, BeautifulSoup, and a comparison with Scrapy, Python’s most widely used scraping framework.

Functionality:

Scrapy: Scrapy is a complete package for downloading web pages, processing them, and saving the results to files and databases.
BeautifulSoup: BeautifulSoup is an HTML and XML parser and requires additional libraries, such as requests or urllib2, to open URLs and store the results, as the short example below shows.
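A minimal example of the typical pairing, fetching with requests and parsing with BeautifulSoup:

import requests
from bs4 import BeautifulSoup

resp = requests.get('http://quotes.toscrape.com/')
soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.title.get_text())            # 'Quotes to Scrape'
for quote in soup.select('span.text'):  # each quote's text on the page
    print(quote.get_text())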

Learning Curve :

Scrapy: Scrapy is a powerhouse for web scraping and offers many ways to scrape a web page. It requires more time to learn and understand how Scrapy works, but once mastered, it becomes easy to build web crawlers and run them with a single command.
BeautifulSoup: BeautifulSoup is relatively easy for programming newcomers to understand and gets smaller tasks done in no time.

Speed and Load :

Scrapy: Scrapy handles big jobs with ease. It can crawl a group of URLs in under a minute, depending on the size of the group, and it does so very smoothly.
BeautifulSoup: BeautifulSoup handles simple scraping jobs efficiently, but it is slower than Scrapy.

Extending Functionality:

Scrapy: Scrapy provides item pipelines, separate classes that process the data your spiders extract: validating it, cleaning or removing fields, and saving it to a database, as sketched below.
BeautifulSoup: BeautifulSoup is good for smaller jobs, but if you require much customization, such as proxies, cookie management, and data pipelines, Scrapy is the better option.
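For illustration, a sketch of a pipeline that saves items to SQLite; the field names match the quotes spider shown later, and the class is enabled via the ITEM_PIPELINES setting:

import sqlite3

class SQLitePipeline:
    def open_spider(self, spider):
        # one connection per crawl
        self.conn = sqlite3.connect('quotes.db')
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS quotes (text TEXT, author TEXT)')

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        self.conn.execute(
            'INSERT INTO quotes VALUES (?, ?)',
            (item.get('text'), item.get('author')))
        return item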

For this blog, we are going to explain the Scrapy framework, as it covers more use cases in real-world scraping problems.

Scrapy: Scrapy is a fast high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages.

Key Features of Scrapy are —

  1. Scrapy has built-in support for extracting data from HTML sources using XPath expressions and CSS selectors.
  2. It is a portable library: written in Python, it runs on Linux, Windows, and Mac.
  3. It is easily extensible.
  4. It is faster than many other scraping libraries; its asynchronous engine lets it fetch many pages in parallel.
  5. It consumes much less memory and CPU.
  6. It helps us build robust and flexible applications with a wide range of built-in functionality.
  7. It has excellent community support for developers, but the documentation is not very beginner-friendly.

I have done many web scraping projects at Excellence Technologies using the Scrapy framework.

Let’s start with the Scrapy framework.

Before we start installing Scrapy, make sure you have Python and pip set up on your system.

Using pip: just run this simple command.

pip install Scrapy

So, we’ll assume that Scrapy is now installed on your system. If you still get an error, you can follow the official installation guide.

To start, we will walk you through these tasks:

  1. Creating a new Scrapy project.
  2. Writing a spider to crawl a site and extract data.

First, we will create a project using this command.

scrapy startproject tutorial

This will create a **tutorial** directory. Next, move into the tutorial/spiders directory and create a file named quotes_spider.py:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        # the pages this spider will visit
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # save each downloaded page to an HTML file
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

name: identifies the spider. It must be unique within a project; you can’t set the same name for different spiders.
parse(): a method that will be called to handle the response downloaded for each of the requests made.
yield: here, yield works like a return, but it turns the method into a generator, so Scrapy can consume the requests (and later the items) one by one as they are produced.

Now we will run our first spider. Go to the project’s top-level directory, then run this command with the spider’s name:

scrapy crawl quotes

This command runs the spider with the name quotes that we’ve just added; it will send requests to the two URLs and log its progress to the terminal.

Now, check the files in the current directory. You should notice that two new files have been created: quotes-1.html and quotes-2.html, with the content for the respective URLs, as our parse method instructs.

This is the basic spider we discussed above. Now that we have a basic idea of how the Scrapy framework works, let’s discuss some important fundamentals.

Extracting data: We can use CSS selectors and XPath selectors to extract data from webpages.
The best way to learn how to extract data with Scrapy is to try selectors in the Scrapy shell:

scrapy shell 'http://quotes.toscrape.com/page/1/'

Now we will see some examples of extracting data with a selector using Scrapy shell.

CSS selector: syntax - response.css('...')

>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]

The result of running response.css('title') is a list-like object called SelectorList, which represents a list of Selector objects that wrap around XML/HTML elements and allow you to run further queries to fine-grain the selection or extract the data.

To extract the text from the title above, you can use this:

>>> response.css('title::text').getall()
['Quotes to Scrape']

There are two things to note here. One is that we’ve added ::text to the CSS query, to mean we want to select only the text directly inside the <title> element. If we don’t specify ::text, we get the full title element, including its tags:

>>> response.css('title').getall()
['<title>Quotes to Scrape</title>']

The other thing is that the result of calling **.getall()** is a list: a selector may return more than one result, so we extract them all. When you know you just want the first result, as in this case, you can use this:

>>> response.css('title::text').get()
'Quotes to Scrape'

Also, we can use this alternative:

>>> response.css('title::text')[0].get()
'Quotes to Scrape'

**XPath:** we can also extract data from webpages using XPath.

Now we will do the same thing using **XPath** that we have already done using **CSS selectors**. We are assuming that our website’s HTML code is similar to the code below.

<html>
 <head>
  <base href='http://example.com/' />
  <title>Example website</title>
 </head>
 <body>
  <div id='images'>
   <a href='image1.html'>Name: My image 1 <br /><img src='image1_thumb.jpg' /></a>
   <a href='image2.html'>Name: My image 2 <br /><img src='image2_thumb.jpg' /></a>
   <a href='image3.html'>Name: My image 3 <br /><img src='image3_thumb.jpg' /></a>
   <a href='image4.html'>Name: My image 4 <br /><img src='image4_thumb.jpg' /></a>
   <a href='image5.html'>Name: My image 5 <br /><img src='image5_thumb.jpg' /></a>
  </div>
 </body>
</html>

So, looking at the HTML code of that page, let’s construct an XPath expression for selecting the text inside the title tag:

>>> response.xpath('//title/text()')
[<Selector xpath='//title/text()' data='Example website'>]

This XPath matched the title, but it returned a selector rather than the plain text. We can extract the proper text from the title tag using the same **.get()** and **.getall()** methods we used with the CSS selector:

>>> response.xpath('//title/text()').getall()
['Example website']
>>> response.xpath('//title/text()').get()
'Example website'

Now we got only the text from the title tag using the XPath selector. As you can see, the **.xpath()** and **.css()** methods always return a **SelectorList** instance, which is a list of new selectors.

If you want to extract only the first matched element, you can call **.get()** on the selector:

>>> response.xpath('//div[@id="images"]/a/text()').get()
'Name: My image 1 '

We can also check whether a tag has no data, or set a default value to use when the data is not there in a given selector:

>>> response.xpath('//div[@id="not-exists"]/text()').get() is None
True
>>> response.xpath('//div[@id="not-exists"]/text()').get(default='not-found')
'not-found'

A default return value can be provided as an argument, to be used instead of **None**.

Now we’re going to get the base URL and some image links from the example HTML code:

>>> response.xpath('//base/@href').get()
'http://example.com/'
>>> response.css('base::attr(href)').get()
'http://example.com/'
>>> response.css('base').attrib['href']
'http://example.com/'

We got the link using all three methods, so we can use either CSS or XPath selectors for extracting data from tags.

Scrapy selectors also support using XPath expressions: XPath expressions are very powerful and are the foundation of Scrapy selectors. In fact, CSS selectors are converted to XPath under the hood.

Now, let’s extract the **text**, **author**, and **tags** from ‘http://quotes.toscrape.com’ using a quote object, which selects a single quote element from the page (for example, quote = response.css('div.quote')[0]):

>>> text = quote.css("span.text::text").get()
>>> text
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").get()
>>> author
'Albert Einstein'

Given that the tags are a list of strings, we can use the **.getall()** method to get them all:

>>> tags = quote.css("div.tags a.tag::text").getall()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']

Having figured out how to extract each bit, we can now iterate over all the quote elements and put them together into a Python dictionary.

**Storing the scraped data:** we can use a simple command to store the data in JSON format.

scrapy crawl quotes -o quotes.json

That will generate a quotes.json file containing all scraped items, serialized in JSON.

**Following links:** we can scrape links from webpages using attr().

>>> response.css('li.next a').get()
'<a href="/page/2/">Next <span aria-hidden="true">→</span></a>'

This gets the anchor element, but we want the href attribute. For this purpose, Scrapy supports a CSS extension that lets you select the attribute contents, like this:

>>> response.css('li.next a::attr(href)').get()
'/page/2/'

There is also an **attrib** property available:

>>> response.css('li.next a').attrib['href']
'/page/2/'

Let’s now see our spider modified to recursively follow the link to the next page, extracting data as it goes.

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }
        # follow the pagination link, if there is one
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

After extracting the data, the **parse()** method looks for the link to the next page, builds a full absolute URL using urljoin() (since the link can be relative), and yields a new request to the next page.

**Request and response follow:** as a shortcut for creating Request objects, you can use response.follow.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            # response.follow accepts relative URLs, so no urljoin() is needed
            yield response.follow(next_page, callback=self.parse)

For <a> elements there is a further shortcut: response.follow uses their href attribute automatically, so the code can be shortened:

for a in response.css('li.next a'):
    yield response.follow(a, callback=self.parse)

**There are many more things to discuss in the Scrapy framework. This is only a basic idea of the Scrapy framework and how it compares with the alternatives.**

We can also say that **Scrapy** is a crawler, while **Beautiful Soup** is a parsing library.

You could say that **Beautiful Soup** has fewer options than **Scrapy**. In other words, with **Beautiful Soup** you need to provide a **specific URL**, and Beautiful Soup will help you get the data from that page. You can give **Scrapy** a start URL, and it will go on, **crawling** and **extracting data**, without you having to provide it with every single URL explicitly.

That is a basic explanation of what web scraping is, along with information about some of its libraries and frameworks.