
gh-dependents


A gh command extension to see the dependents of your repository.


See The GitHub Blog: GitHub CLI 2.0 includes extensions!

Install

gh extension install otiai10/gh-dependents

Usage

gh dependents otiai10/lookpath
# gh dependents {user}/{repo}

Advanced Usage

gh dependents -v -t=json otiai10/lookpath
# -v to show verbose log
# -t=json to output in JSON format template

How it works

  • This command simply crawls the /network/dependents page of your repository.

Issues and Feature Requests

Owner

Hiromu OCHIAI (🙋 ❤️ 🍣)
Similar Resources

High-performance crawler framework based on fasthttp

predator / 掠食者: a high-performance crawler framework built on fasthttp. Its README includes an example covering nearly all completed features; see the comments there for usage.

May 2, 2022


crawlergo is a browser crawler that uses Chrome headless mode for URL collection.

A powerful browser crawler for web vulnerability scanners.

Dec 29, 2022

A crawler/scraper based on golang + colly, configurable via JSON

Super-Simple Scraper: a very thin layer on top of Colly which allows configuration from a JSON file. The output is JSONL, which is ready to be impo…

Aug 21, 2022

New World Auction House Crawler In Golang

New-World-Auction-House-Crawler: the goal of this library is to have a process which grabs New World auction house data in the background while playing the…

Sep 7, 2022

Simple content crawler for joyreactor.cc

Reactor Crawler: a simple CLI content crawler for Joyreactor. It finds all media content on the page you provide and saves it. If there will be any…

May 5, 2022

A PCPartPicker crawler for Golang.

gopartpicker: a scraper for pcpartpicker.com for Go, implemented using Colly. Features: extract data from part list URLs, search for parts, extract…

Nov 9, 2021

Multiplexer: HTTP-Server & URL Crawler

Multiplexer: HTTP-Server & URL Crawler. The application is an HTTP server with a single handler. The handler accepts a POST request with a list of ur…

Nov 3, 2021

A simple crawler sending Telegram notification when Refurbished Macbook Air / Pro in stock.

Jan 30, 2022
Comments
  • Specify offset in request

    Hey there!

    Thanks for building out this extension, it's awesome. One thing I would love is the ability to start from a specific page offset (what looks like the dependents_after query param in the UI). For analyzing a larger repo, the current rate-limiting backoff might not be enough - it seems to bust at around page 30-40 for me. I'm wondering how much effort it would be to plumb through a command-line flag to set the starting dependents_after value so it can pick up from a previous cursor.

    (Side note: I just noticed there's a package vs. repository filter on the Network Dependents page - would that be simple enough to wire up, too?)
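    A sketch of how such options might look, in Go with the standard flag package. Both flags are hypothetical (the extension does not ship them), and the dependent_type parameter name is an assumption about what GitHub's UI uses for the package/repository filter:

    ```go
    package main

    import (
    	"flag"
    	"fmt"
    	"net/url"
    )

    func main() {
    	// Hypothetical flags: neither exists in the extension today.
    	after := flag.String("after", "", "resume from this dependents_after cursor")
    	pkg := flag.Bool("packages", false, "list package dependents instead of repositories")
    	flag.Parse()

    	// Build the first page URL the crawler would request.
    	u, _ := url.Parse("https://github.com/otiai10/lookpath/network/dependents")
    	q := u.Query()
    	if *after != "" {
    		q.Set("dependents_after", *after)
    	}
    	if *pkg {
    		// The exact parameter name is an assumption about GitHub's UI.
    		q.Set("dependent_type", "PACKAGE")
    	}
    	u.RawQuery = q.Encode()
    	fmt.Println(u)
    }
    ```

    From there the crawler would proceed exactly as it does now, just starting from the supplied cursor instead of page one.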

  • 2021/08/31 23:19:57 429 Too Many Requests

    gh dependents -v moby/moby
    # ...
    [Page  58] https://github.com/moby/moby/network/dependents?dependents_after=MTY0NTUyODc1NzA	= 1498
    [Page  59] https://github.com/moby/moby/network/dependents?dependents_after=MTY0NDI5OTIzOTQ	= 1528
    [Page  60] https://github.com/moby/moby/network/dependents?dependents_after=MTY0NDI5Njg3OTc	= 1558
    [Page  61] https://github.com/moby/moby/network/dependents?dependents_after=MTY0NDI5MzM4OTQ	= 1588
    [Page  62] https://github.com/moby/moby/network/dependents?dependents_after=MTY0NDI4NTk2NzE
    2021/08/31 23:19:57 429 Too Many Requests
    $
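    One way to soften the 429s is exponential backoff between retries. A minimal Go sketch of the idea; the extension's actual retry behavior may differ, and the delays are shortened here for illustration:

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    // fetchWithBackoff retries fn with exponentially growing delays whenever
    // it reports HTTP 429, giving up after maxTries attempts.
    func fetchWithBackoff(fn func() (status int, err error), maxTries int) error {
    	delay := 100 * time.Millisecond
    	for try := 1; try <= maxTries; try++ {
    		status, err := fn()
    		if err != nil {
    			return err
    		}
    		if status != 429 {
    			return nil // success or a non-rate-limit status
    		}
    		fmt.Printf("attempt %d: 429 Too Many Requests, sleeping %v\n", try, delay)
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return fmt.Errorf("still rate limited after %d attempts", maxTries)
    }

    func main() {
    	// Simulated responses standing in for real page fetches:
    	// two 429s, then success.
    	responses := []int{429, 429, 200}
    	i := 0
    	err := fetchWithBackoff(func() (int, error) {
    		s := responses[i]
    		i++
    		return s, nil
    	}, 5)
    	fmt.Println("err:", err)
    }
    ```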
    
Distributed web crawler admin platform for spiders management regardless of languages and frameworks.

Crawlab: a Golang-based distributed web crawler admin platform that supports spiders in any language and framework.

Jan 7, 2023
ant (alpha) is a web crawler for Go.

The package includes functions that can scan data from the page into your structs or slice of structs, this allows you to reduce the noise and complexity in your source-code.

Dec 30, 2022
Apollo 💎 A Unix-style personal search engine and web crawler for your digital footprint.

Apollo: a Unix-style personal search engine and web crawler for your digital footprint (demo: apollodemo.mp4).

Dec 27, 2022
Fast, highly configurable, cloud native dark web crawler.

Bathyscaphe dark web crawler: Bathyscaphe is a fast, highly configurable, cloud-native dark web crawler written in Go.

Nov 22, 2022
A recursive, mirroring web crawler that retrieves child links.

Jan 29, 2022
Fast golang web crawler for gathering URLs and JavaScript file locations.

Fast golang web crawler for gathering URLs and JavaScript file locations. This is basically a simple implementation of the awesome Gocolly library.

Sep 24, 2022
Elegant Scraper and Crawler Framework for Golang

Colly: Lightning Fast and Elegant Scraping Framework for Gophers. Colly provides a clean interface to write any kind of crawler/scraper/spider. With Colly…

Jan 9, 2023
Pholcus is a distributed high-concurrency crawler software written in pure golang

Pholcus (幽灵蛛) is a distributed, high-concurrency crawler written in pure Go, intended only for programming study and research. It supports three run modes (standalone, server, client) and three interfaces (Web, GUI, command line); its rules are simple and flexible, batch tasks run concurrently, and output formats are rich (mysql/mongodb/kafka/csv/excel, etc.)…

Dec 30, 2022
:paw_prints: Creeper - The Next Generation Crawler Framework (Go)

About Creeper is a next-generation crawler which fetches web page by creeper script. As a cross-platform embedded crawler, you can use it for your new

Dec 4, 2022
Go IMDb Crawler

Go IMDb Crawler. Hit the ⭐ button to show some ❤️. INSPIRATION: Want to know which celebrities share a birthday with yours? Want to get th…

Aug 1, 2022