Python Web Applications

27.03.2025

Python, which is used effectively in many areas, is also used to build web pages with frameworks such as Django.

Python Web Applications

Before explaining Python web application architecture, there are a few things worth mentioning. There are many third-party applications for creating a website, and internationally recognized platforms such as WordPress (PHP & MySQL) and Wix are very widely used.

GitHub Project Link:
https://github.com/omersahintr/BootCampEdu/tree/main/NetProject

The main reasons why third-party applications like WordPress and Wix are preferred can be listed as:

  • They are constantly kept up to date,
  • Accessibility,
  • Open source,
  • Extensibility,
  • Free of charge or available for a modest fee.

However, there is a limit to what you can do with these ready-made platforms. In addition, they become uneconomical once professional plugins and themes are involved.

This is where technologies like ASP.NET, PHP, JavaScript, CSS, Java, HTML and Python come to our aid. If a site has to go into depth on a particular subject, WordPress- or Wix-style builders cannot fully help us.

In this article, we will try to shed light on how to write a website in the Python web environment or, more precisely, how to write applications with internet access.


HTTP Web Protocol Operations

Generally, over HTTP the GET and POST methods are used to send data (a parameter or query) to the server and wait for a response. Data sent with the POST method travels in the request body, so there is no practical limit to its size (length), while the GET method is limited by the maximum length of the address line. POST is therefore often preferred: it is not length-limited, and the query does not appear in the address bar.
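To make the difference visible, here is a minimal sketch using the public echo service httpbin.org (an assumption of this example, not a site from the article): with GET the query is appended to the URL, while with POST the same data travels in the request body.

import requests as req

# GET: parameters are appended to the URL query string (length-limited)
get_resp = req.get("https://httpbin.org/get", params={"q": "python"})
print(get_resp.url)   # https://httpbin.org/get?q=python

# POST: the same data travels in the request body, not in the URL
post_resp = req.post("https://httpbin.org/post", data={"q": "python"})
print(post_resp.url)  # https://httpbin.org/post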

Python Web Interface (Frontend) and Background Coding (Backend)

Let’s examine how to develop Python applications for a specific purpose beyond just making a website. Let’s start working with the JSON, CSV or text-based data set we have.

Pulling Online Data Sets and Generating Results

The requests Library

Let’s start by importing the requests library. It does not ship with Python itself; if it is missing, install it with pip install requests.
import requests as req

# request method
r = req.get('https://omersahin.com.tr/')

print(f"Status: {r.status_code} \n"    # status code: 200 OK
      f"Encoding: {r.encoding} \n"     # encoding: utf-8
      f"Text: {r.text} \n"             # text: html content
      f"Headers: {r.headers}")         # headers: dictionary

A query can be sent to any site on the internet (http/https) with the get() method. The query returns a result as long as it complies with the server's policies. The username and password information needed to log in to a site can be sent with the auth= parameter:

req = requests.get("https://omersahin.com.tr/login.php", auth=("username", "password"))
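As a runnable illustration, httpbin.org (again an outside test service, not the site above) exposes a Basic-Auth endpoint that accepts exactly the credentials embedded in its path; the username and password here are made up:

import requests as req

# httpbin.org/basic-auth/<user>/<pass> accepts exactly those credentials
ok = req.get("https://httpbin.org/basic-auth/user/pass", auth=("user", "pass"))
bad = req.get("https://httpbin.org/basic-auth/user/pass", auth=("user", "wrong"))
print(ok.status_code)   # 200 -- authorized
print(bad.status_code)  # 401 -- unauthorized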

Data Exchange with the get() Method

With the get() method, the URL is sent to the server and an HTTP status code is returned as the result. As an example, let’s check with get whether the JSON file containing earthquake data in our GitHub BootCamp project exists.

import requests as req

# request method
r = req.get('https://github.com/omersahintr/BootCampEdu/blob/main/NetProject/earth-quake.json')

print(r)  # <Response [200]>

Some of the HTTP response codes and their meanings:

Request Code       Meaning
1XX (100-199)      Informational
2XX (200-299)      Success
3XX (300-399)      Redirection
4XX (400-499)      Client Error
5XX (500-599)      Server Error

The most common codes you will encounter will be 200, 301, 302, 304, 401, 403, 404, 500, 501 and 510.

If you wish, you can check the response code with an if statement. For example:

import requests as req

# request method
r = req.get('https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json')

if r.status_code == 200:
    print("Wonderful")    # Wonderful
    print(r.status_code)  # 200
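requests also has a built-in shortcut for this check: raise_for_status() raises an HTTPError for any 4XX or 5XX response, so you do not have to compare status codes by hand.

import requests as req

r = req.get('https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json')
r.raise_for_status()  # raises requests.exceptions.HTTPError on 4XX/5XX
print("Wonderful")    # reached only if the request succeeded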

To access a JSON file on GitHub from outside with requests, use https://raw.githubusercontent.com/ instead of https://github.com/ . So the path to your JSON file on GitHub should be:

https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json
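If you do this conversion often, a small helper can rewrite the address for you. The function below is a hypothetical convenience, not part of the BootCamp project; it assumes the standard github.com/<user>/<repo>/blob/<branch>/<path> layout:

def to_raw_url(blob_url):
    # hypothetical helper: rewrite a github.com "blob" URL into its raw form
    return (blob_url
            .replace("https://github.com/", "https://raw.githubusercontent.com/")
            .replace("/blob/", "/", 1))

print(to_raw_url("https://github.com/omersahintr/BootCampEdu/blob/main/NetProject/earth-quake.json"))
# https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json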

Data Extraction with requests.get()

With the text attribute, the content of the URL queried by requests.get() can be printed on the screen. Let’s write our code by defining a function called pull_json.

import requests as req

# request method
def pull_json(url):
    r = req.get(url)
    content = r.text

    if r.status_code == 200:
        print(content)  # prints the raw text content

pull_json('https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json')

If you wish, you can parse the JSON file directly with the json() method, which returns Python lists and dictionaries instead of raw text.

import requests as req

# request method
def pull_json(url):
    r = req.get(url)
    content = r.json()

    if r.status_code == 200:
        print(content)  # prints the parsed JSON content

pull_json('https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json')

Querying JSON Data

import requests as req

# look up matching records in the json file
def pull_json_lookup(url_json):
    resp = req.get(url_json)
    if resp.status_code == 200:
        for found in resp.json():
            if found["location"] == "Ege Denizi":  # "Ege Denizi" is Turkish for Aegean Sea
                print(found["location"], found["magnitude"], found["depth"])

pull_json_lookup('https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json')

Screen Output:

  • Ege Denizi 2.6 5.9
  • Ege Denizi 2.7 7
  • Ege Denizi 2.7 7
  • Ege Denizi 2.6 7
  • Ege Denizi 2.5 5.59
  • Ege Denizi 3 7
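Once parsed, the JSON can be queried in any way Python lists and dictionaries allow. As a small extension sketch (assuming the same location, magnitude and depth fields as above), the following picks out the strongest earthquake in the data set:

import requests as req

url = 'https://raw.githubusercontent.com/omersahintr/BootCampEdu/main/NetProject/earth-quake.json'
resp = req.get(url)

if resp.status_code == 200:
    quakes = resp.json()
    # float() guards against magnitudes stored as strings
    strongest = max(quakes, key=lambda q: float(q["magnitude"]))
    print(strongest["location"], strongest["magnitude"], strongest["depth"])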

Requesting with the POST Method

We can use the URL https://jsonplaceholder.typicode.com/todos to try the POST method and see its result in a single example. (jsonplaceholder is a free fake REST API: it answers write requests realistically but does not actually store the changes.)

import requests as req

# POST example:
myToDo = {
    "userId": 2,
    "title": "try first post",
    "completed": False
}
MyPostUrl = "https://jsonplaceholder.typicode.com/todos"
post_response = req.post(MyPostUrl, json=myToDo)
print(post_response.json())
# {'userId': 2, 'title': 'try first post', 'completed': False, 'id': 201}

Requesting with the GET Method

import requests as req

# GET example:
myGetUrl = "https://jsonplaceholder.typicode.com/todos"
response = req.get(myGetUrl)
print(response.json())

Requesting with the PUT Method

PUT is used to replace the entire record with a given ID from beginning to end. It is used more often than PATCH.

import requests as req

# PUT example:
PutUrl = "https://jsonplaceholder.typicode.com/todos/2"
MyPut = {
    "userId": 2,
    "title": "try second put",
    "completed": False
}
put_response = req.put(PutUrl, json=MyPut)
print(put_response.json())
# {'userId': 2, 'title': 'try second put', 'completed': False, 'id': 2}

Requesting with the PATCH Method

It is similar to the PUT method, but it is used to change only one or a few fields. It is not used very often; when a whole record is to be changed, the PUT method is preferred.

import requests as req

# PATCH example:
PatchUrl = "https://jsonplaceholder.typicode.com/todos/2"
MyPatch = {
    "title": "try last patch"
}
patch_response = req.patch(PatchUrl, json=MyPatch)
print(patch_response.json())
# {'userId': 2, 'title': 'try last patch', 'completed': False, 'id': 2}

Requesting with the DELETE Method

It is used to completely delete the record with a given id from the data set. Let’s see this in Python code.

import requests as req

# DELETE example:
deleteUrl = "https://jsonplaceholder.typicode.com/todos/2"
myDelete = req.delete(deleteUrl)
print(myDelete.json())
# {}  empty, because the record has been deleted

End-of-Chapter Showcase Apps

Subdomain Tester

Let’s write an application to check the subdomains of a site; it is the kind of tool typically used for cyber-security purposes. First, create a text file in the directory where the project runs.

domainList.txt :

admin
drive
wp
login
gpt
ai
buy
video
image
code
test

import requests as req

def do_request(url):
    try:
        return req.get(url, timeout=5)  # timeout added so a dead host cannot hang the scan
    except req.exceptions.RequestException:
        pass  # returns None when the request fails


with open("domainList.txt", "r") as f:
    for key in f:
        url = "https://" + key.strip() + ".google.com"  # strip() removes the trailing newline
        response = do_request(url)
        if response is not None:  # a response came back, even a 4XX one
            print(url, response)
        else:
            print(url, "Connection Error")

# https://admin.google.com <Response [200]>
# https://drive.google.com <Response [200]>
# https://wp.google.com Connection Error
# https://login.google.com Connection Error
# https://gpt.google.com Connection Error
# https://ai.google.com <Response [200]>
# https://buy.google.com Connection Error
# https://video.google.com <Response [200]>
# https://image.google.com <Response [200]>
# https://code.google.com <Response [200]>

Sitemap Creation

We can create a sitemap of any website. We will map it completely by walking through its HTML tags, following the link tags of the form <a href="">.

import requests as req
from bs4 import BeautifulSoup

siteUrl = "https://omersahin.com.tr"
foundLinks = []

def make_req(url):
    spider = req.get(url)
    soup = BeautifulSoup(spider.text, "html.parser")
    return soup

# HTML parse process:

def crawler(url):
    links = make_req(url)

    for linkhref in links.find_all("a"):   # <a href="https....">
        foundLink = linkhref.get("href")   # href="https...."

        if foundLink:
            if "#" in foundLink:
                foundLink = foundLink.split("#")[0]  # drop in-page anchors
            if foundLink not in foundLinks:
                foundLinks.append(foundLink)
                print(len(foundLinks), "-", foundLink)
                crawler(foundLink)         # recursive crawl

try:  # a failed request (e.g. a relative or external link) ends the crawl
    crawler(siteUrl)
except Exception:
    pass

Screen Output:

1 – https://www.omersahin.com.tr
2 – https://www.omersahin.com.tr/web/google-seo/
3 – https://www.omersahin.com.tr/reklam/
4 – https://www.omersahin.com.tr/web/adsense/
5 – https://www.omersahin.com.tr/web/adwords/
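One caveat: the crawler above follows every href it finds, so a relative path or an external link makes the request fail and the try/except ends the whole crawl. A hedged variant of the same idea, resolving relative links with urljoin and staying on the original domain, could look like this:

import requests as req
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

siteUrl = "https://omersahin.com.tr"
foundLinks = set()

def crawler(url):
    soup = BeautifulSoup(req.get(url, timeout=5).text, "html.parser")
    for a in soup.find_all("a"):
        href = a.get("href")
        if not href:
            continue
        link = urljoin(url, href).split("#")[0]  # resolve relative links, drop anchors
        same_site = urlparse(link).netloc.endswith("omersahin.com.tr")
        if same_site and link not in foundLinks:
            foundLinks.add(link)
            print(len(foundLinks), "-", link)
            try:
                crawler(link)                    # keep crawling even if one page fails
            except req.exceptions.RequestException:
                pass

crawler(siteUrl)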

Printing and Counting the Headings and Subheadings (h1, h2, h3) on a Site

In the same way, if we want to count the heading and subheading tags defined as <h1>, <h2> and <h3> and print them on the screen, it will be enough to write the following code.

import requests as req
from bs4 import BeautifulSoup

url = input("Web Site: ")
connectUrl = req.get(url)
soupWebText = BeautifulSoup(connectUrl.text, "html.parser")


def h1_counter(url):
    i_1 = 0
    for h1s in soupWebText.find_all("h1"):
        if h1s.text:  # skip empty headings
            i_1 += 1
            print("H1", h1s.text)
    print("H1 Count: ", i_1)
    h2_counter(url)  # chain on to the h2 pass

def h2_counter(url):
    i_2 = 0
    for h2s in soupWebText.find_all("h2"):
        if h2s.text:
            i_2 += 1
            print("\tH2", i_2, "-", h2s.text)
    print("H2 Count: ", i_2)
    h3_counter(url)  # chain on to the h3 pass

def h3_counter(url):
    i_3 = 0
    for h3s in soupWebText.find_all("h3"):
        if h3s.text:
            i_3 += 1
            print("\tH3", i_3, "-", h3s.text)
    print("H3 Count: ", i_3)

try:
    h1_counter(url)  # h1 pass, chained to h2 and h3
except Exception:  # swallow request/parse errors, as in the original
    pass

When we enter the address https://www.omersahin.com.tr/python-threading/ at the prompt, the program prints every h1, h2 and h3 heading it finds on that page, together with a count for each level.
