IPFS Project && Working Group Roadmaps Repo

IPFS Project Roadmap v0.6.0

IPFS Mission Statement

The mission of IPFS is to create a resilient, upgradable, open network to preserve and grow humanity’s knowledge.

This looks different! Want to participate in helping define our "Mission Statement 2.0"? Add your thoughts here!

2020 Priority

Scoping in to 2020 H1

Instead of a 2020 year-long plan, we decided to focus on a 2020 H1 plan (covering Q1 & Q2) so as to:

  • Enable our team to truly focus on one thing, complete it, and then move on to other challenges instead of doing many things at once
  • Better understand the components of each goal and plan our time to hit them, rather than trying to nail down plans too far into the future
  • Be adaptable and prepared for surprises, re-prioritizations, or market shifts that require us to refocus energy or change our plan in the course of the year

2020 H1 Priority Selection Criteria

Before selecting a 2020 H1 priority, we did an open call for Theme Proposals to surface areas the community felt were of high importance and urgency. We combined these great proposals with an analysis of the project, team, and ecosystem state - and the biggest risks to IPFS Project success. Out of that analysis, we identified there were two main aspects our 2020 H1 plan MUST address:

  1. Mitigate current IPFS pain points around network performance and end user experience that are hindering wider adoption and scale
  2. Increase velocity, alignment, and capacity for IPFS devs and contributors to ensure our time and efforts are highly leveraged (because if we can make fast, sustained, high-quality progress by leveling-up our focus and healthy habits, we can achieve our goals faster and ensure contributing to IPFS is fun and productive!)

📞 Content Routing

Given the selection criteria, our main priority for the first half of 2020 - the next 6 months - is improving the performance and reliability of content routing in the IPFS network. 'Content routing' is the process of finding a node hosting the content you're looking for, such that you can fetch the desired data and quickly load your website/dapp/video/etc. As the IPFS network scaled this past year (over 30x!), it ran into new problems in our distributed routing algorithms - struggling to find content spread across many unreliable nodes. This was especially painful for IPNS, which relied on multiple of these slow/unreliable queries to find the latest version of a file. These performance gaps caused IPFS to lag and stall while searching for the needed content, hurting the end user experience and making IPFS feel broken.

Searching the network to find desired content (aka, using IPFS as a decentralized CDN) is one of the most common actions for new IPFS users and is required by most ipfs-powered dapp use cases - therefore, it's the number 1 pain point we need to mitigate in order to unlock increased adoption and scalability of the network!
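For intuition about why routing slows down as the network grows, here is a toy sketch of the Kademlia-style iterative lookup the DHT performs (assumed simplifications: integer peer IDs, a fully in-memory network, no timeouts - the real logic lives in go-libp2p's DHT): each hop asks the XOR-closest peers for either a provider record or closer peers, so every slow or unreachable peer on the path adds user-visible latency.

```python
# Toy sketch of a Kademlia-style iterative lookup (illustration only;
# the peer IDs and network below are made up).

def iterative_find_provider(key, peers, start, alpha=3):
    """Repeatedly query the alpha not-yet-asked peers whose IDs are
    XOR-closest to `key`, until a provider record is found."""
    queried, to_query = set(), [start]
    while to_query:
        to_query.sort(key=lambda p: p ^ key)          # XOR distance to key
        batch, to_query = to_query[:alpha], to_query[alpha:]
        for peer in batch:
            if peer in queried:
                continue
            queried.add(peer)
            neighbors, provides = peers[peer]
            if key in provides:
                return peer                           # found a provider
            # Otherwise the peer refers us to closer peers it knows about.
            to_query.extend(n for n in neighbors if n not in queried)
    return None                                       # lookup failed

# A tiny made-up network: peer_id -> (known neighbors, keys it provides).
peers = {
    1: ([2, 3], set()),
    2: ([4], set()),
    3: ([], set()),
    4: ([], {42}),
}
print(iterative_find_provider(42, peers, start=1))    # → 4
```

In the real network each of those hops is a round-trip to a remote, possibly offline node, which is exactly where the latency and reliability work in this priority lands.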

We considered a number of other potential goals - especially all the great 2020 Theme Proposals - before selecting this priority. However, we decided it was more important to focus core working group dev time on the main blockers and pain points to enable the entire ecosystem to grow and succeed. Many of these proposals are actually very well suited for community ownership via DevGrants and collaborations - and some of them, like "IPFS in Rust" and "Examples and Tutorials", already have grants or bounties associated with them!

2020 Working Groups

The IPFS project includes the collective work of several focused teams, called Working Groups (WGs). Each group defines its own roadmap with tasks and priorities derived from the main IPFS Project Priority. To better orient around our core focus for 2020 H1, we created a few new working groups (notably "Content Routing"), and spun others down (notably our "Package Managers" working group). For 2020 H1, we have 5 main working groups - with our "Ecosystem" working group divided into 3 sub-groups.

Each WG’s top-line focus:

  • Content Routing: Ensure all IPFS users can find and access content they care about in a distributed network of nodes
  • Testground: Provide robust feedback loops for content routing development, debugging, and benchmarking at scale
  • Bifrost (IPFS Infra): Make sure our gateway and infra scale to support access to the IPFS network
  • Ecosystem: Ensure community health and growth through collaborations, developer experience and platform availability
    • Browsers / Connectivity: Maximize the availability and connectivity of IPFS on the web
    • Collabs / Community: Support IPFS users and grow new opportunities through research, collaborations and community engagement
    • Dev Ex: Support the IPFS technical community through documentation, contributor experience, API ergonomics and tooling
  • Project: Support team functioning, prioritization, and day-to-day operations

Looking for more specifics? Check out the docs on our team roles and structure!

2020 Epics

We've expanded our 2020 Priority into a list of Epic Endeavours that give an overview of the primary targets IPFS has for 2020 H1. If you are pumped about these Epics and want to help, you can get involved! See the call to action (CTA) for each section below.

1. Build IPFS dev capacity and velocity

In order to achieve our content routing goal for 2020 H1, we need to level up our own leverage, coordination, velocity, and planning as a project to ensure all contributors spend their time and energy effectively. This includes a few different dimensions:

  • Integrate research via the ResNetLab into our design practice to ensure our work builds on the knowledge and experience of leading researchers in our fields
  • Empower new contributors in the IPFS ecosystem through DevGrants and collaborations to upgrade and extend IPFS to solve new problems
  • Invest in developer tooling, automation, and fast feedback loops to accelerate experimentation and iteration
  • Upgrade project planning and management within and between working groups to ensure we define, estimate, track and unblock our work efficiently
  • Focus our attention on fewer things to improve completion rate and reduce churn, saying "not now" or finding other champions for nice-to-have projects in order to allocate energy and attention to the most important work

You can get involved with ResNetLab RFPs or by proposing/funding projects in the DevGrants repo!

2. Improve content routing performance such that 95th percentile content routing speed is <5s

Improving content routing performance requires making improvements and bugfixes to the go-libp2p DHT at scale, and changing how we form, query, and resolve content in the IPFS network to be faster and more scalable. This involves a combination of research, design, implementation, and testing. Making changes to the configuration of the entire network is non-trivial - that's why we've been investing in the InterPlanetary Testground, a new set of tools for testing next generation P2P applications, to help us diagnose issues and evaluate improvements prior to rolling out upgrades to the entire public network. You can track the work in these milestones on ZenHub.

If you want to help refine the detailed milestones, or take on some of the improvements required to hit this goal, see the Content Routing Work Plan to dive deeper!
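To make the headline metric concrete, the sketch below computes a nearest-rank 95th percentile over a set of lookup latencies (the numbers are invented for illustration, not real measurements):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p% of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical wall-clock times (seconds) for 10 content routing lookups.
latencies = [0.4, 0.9, 1.2, 1.5, 2.1, 2.8, 3.3, 4.1, 4.6, 12.0]
p95 = percentile(latencies, 95)
print(f"p95 = {p95:.1f}s ->", "meets" if p95 < 5 else "misses", "the <5s goal")
# With only 10 samples, nearest-rank p95 is the slowest lookup (12.0s),
# so a single badly-stalled query is enough to miss the target.
```

A percentile target, rather than an average, is what makes the tail of slow DHT queries the thing this workstream has to fix.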

3. Invest in IPFS community enablement and support

Supporting community health and growth continues to be a core focus for IPFS as we scale to more users, applications, and use cases. Refining our adoption pathways, continuing to grow platform availability, and supporting our collaborators to bring IPFS to new users and use cases helps us maximize the impact and value we create in the world.

  • Scale the number of users and applications supported by IPFS through talks, guides, and how-tos
  • Refine our APIs to simplify end-user adoption and maximize ease of use
  • Bring IPFS to browsers to maximize default availability and connectivity on the web
  • Continue improving our new IPFS Docs Site, to ensure developer & user questions are clearly answered and actionable
  • Invest in explicit community stewardship responsibilities to ensure there are answers, tools, and fast feedback loops to support new IPFS users and contributors

Great ways to start helping enable the IPFS community include: suggesting or building new tools to support IPFS users, reviewing open PRs, answering questions on http://discuss.ipfs.io and on our IRC channels on freenode/matrix, or writing your own how-tos and guides to use IPFS for your use case!

2019 Priority

Our core goal for 2019 was to make large-scale improvements to the IPFS network around scalability, performance, and usability. By focusing on the 📦 Package Managers use case, we hoped to identify, prioritize, and demonstrably resolve performance/usability issues, while driving adoption via a common and compelling use case that all developers experience daily. We hoped this focus would help us hone IPFS to be production ready (in functionality and practice), help scale network usage to millions of nodes, and accelerate our project and community growth/velocity.

Graded 2019 Epics

  1. The reference implementations of the IPFS Protocol (Go & JS) become Production Ready 🔁
  2. Support Software Package Managers in entering the Distributed Web ❗️
  3. Scale the IPFS Network 🔁
  4. The IPFS Community of Builders gets together for the 1st IPFS Conf
  5. IPFS testing, benchmarks, and performance optimizations 🔁
  6. Support the growing IPFS Community

You can see the details of the work we took on in each milestone, and which milestones we achieved, in the archived 2019 IPFS Project Roadmap.

Sorting Function

D = Difficulty (or "Delta" or "Distance"), E = Ecosystem Growth, I = Importance

To identify our top focus for 2019 and rank the future goals in our upgrade path, we used a sorting function to prioritize potential focus areas. Each goal was given a score from 1 (low) - 5 (high) on each axis. We sorted first in terms of low difficulty or "delta" (i.e. minimal additional requirements and fewer dependencies from the capabilities IPFS has now), then high ecosystem growth (growing our community and resources to help us gravity assist and accelerate our progress), and finally high importance (we want IPFS to have a strong, positive impact on the world). Future goals below are listed in priority order using this sorting function.
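The sorting function can be expressed directly in code - sort ascending on D, then descending on E, then descending on I. The scores below are the ones listed for the first few goals in this document:

```python
# Rank goals by low Difficulty first, then high Ecosystem growth,
# then high Importance (scores taken from this roadmap).
goals = [
    ("Package Managers",  1, 5, 3),   # (name, D, E, I)
    ("Large Files",       1, 4, 3),
    ("Decentralized Web", 2, 4, 3),
    ("Encrypted Web",     2, 3, 4),
    ("Distributed Web",   2, 2, 4),
]

# Negate E and I so a single ascending sort handles all three axes.
ranked = sorted(goals, key=lambda g: (g[1], -g[2], -g[3]))
for name, d, e, i in ranked:
    print(f"D{d} E{e} I{i}  {name}")
```

Running this reproduces the ordering of the goal sections below.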

📦 Package Managers (D1 E5 I3)

The most used code and binary Package Managers are powered by IPFS.

Package Managers collect and curate sizable datasets. Top package managers distribute code libraries (eg npm, pypi, cargo, ...), binaries and program source code (eg apt, pacman, brew ...), full applications (app stores), datasets, and more. They are critical components of the programming and computing experience, and are the perfect use case for IPFS.

Most package managers can benefit tremendously from the content-addressing, peer-to-peer, decentralized, and offline capabilities of IPFS. Existing Package Managers should switch over to using IPFS as a default, or at least an optional way of distributing their assets, and their own updates. New Package Managers should be built entirely on IPFS.

Code libraries, programs, and datasets should become permanent, resilient, partition tolerant, and authenticated, and IPFS can get them there. Registries become curators and seeds, but do not have to bear the costs of the entire bandwidth. Registries could become properly decentralized.

We have a number of challenges ahead to make this a reality, but we are already beyond half-way. We should be able to get top package managers to (a) use IPFS as an optional distribution mechanism, then (b) use that to test all kinds of edge cases in the wild and to drive performance improvements, then (c) get package managers to switch over the defaults.
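The core property package managers gain is sketched below: because an asset's address is derived from its bytes, any mirror or peer can serve it and the client verifies integrity on arrival. (The package name and bytes are invented for the example, and a plain SHA-256 hex digest stands in for IPFS's multihash-based CIDs.)

```python
import hashlib

def content_address(data: bytes) -> str:
    # Stand-in for a real CID: a plain SHA-256 hex digest of the bytes.
    return hashlib.sha256(data).hexdigest()

# A registry only needs to publish name@version -> address; the bytes
# themselves can then come from any peer or mirror.
package = b"pretend-tarball-bytes-for-somelib-1.0.0"
addr = content_address(package)

fetched = package  # pretend these bytes arrived from an untrusted peer
assert content_address(fetched) == addr, "integrity check failed"
print("verified", addr[:12], "...")
```

This is why registries can shrink to curators and seeds: trust moves from the server you downloaded from to the address itself.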

Future Goals

🗂 Large Files (D1 E4 I3)

By 2020, IPFS becomes the default way to distribute files or collections of files above 1GB

HTTP is not good for distributing large files or large collections of small files. Anything above 1GB starts running into problems (resuming, duplication, centralized bandwidth limitations, etc). BitTorrent works well for single archives that won't change, or won't duplicate, but fails in a number of places. IPFS has solved most of the hard problems but hasn't yet made the experience so easy that people default to IPFS to distribute large files. IPFS should solve this problem so well, it should be so easy and delightful to use, and it should be so high performance that it becomes the default way to move anything above 1GB world-wide. This is a massive hole right now that IPFS is well-poised to fill -- we just need to solve some performance and usability problems.

🔄 Decentralized Web (D2 E4 I3)

IPFS supports decentralized web apps built on p2p connections with power and capabilities at the edges.

In web 2.0, control of the web is centralized - its location-addressing model and client-server architecture encourage reliance and trust of centralized operators to host services, data, and intermediate connections. Walled gardens are common, and our data is locked into centralized systems that increase the risk of privacy breaches, state control, or that a single party can shut down valued services. The decentralized web is all about peer-to-peer connections and putting users in control of their tools and data. It does this by connecting users directly to each other and using verifiable tools like hash-links and encryption to ensure the power and control in the network is retained by the participants themselves. The decentralized web (as distinguished from Distributed Web) is NOT about partition tolerance, or making the web work equally well in local-area networks/mobile/offline - the focus here is on the control and ownership of services.

IPFS has solved most of the hard underlying design problems for decentralized web, but hasn't yet made the experience easy enough for end-users to experience it in the applications, tools, and services they use. This requires tooling and solutions for developers to sustainably run their business without any centralized intermediary facilitating the network (though centralized providers may still exist to augment and improve the experience for services that already work decentralized by design). Designing Federation for interop with current systems is key for the Migration Path.

🔒 Encrypted Web (D2 E3 I4)

Apps and Data are fully end-to-end encrypted at rest. Users have reader, writer, and usage privacy.

Apps and user data on IPFS are completely end-to-end encrypted, at rest, with only users having access. Users get reader and writer privacy by default. Any nodes providing services usually do so over encrypted data and never get access to the plain data. The apps themselves are distributed encrypted, decrypted and loaded in a safe sandbox in the users' control. Attackers (including ISPs) lose the ability to spy on users' data, and even which applications users are using. This works with all top use case apps -- email, chat, forums, collaboration tools, etc.

♻️ Distributed Web (D2 E2 I4)

Info and apps function equally well in local area networks and offline. The Web is a partitionable fabric, like the internet.

The web and mobile -- the most important application platforms on the planet -- are capable of working entirely in sub-networks. The norm for networked apps is to use the available data and connections, to sync asynchronously, and to leverage local connectivity protocols. The main apps for every top use case work equally well in offline or local network settings. It means IPFS and apps on top work excellently on desktops, the browser, and mobile. Users can use webapps like email, chat, forums, social networks, collaboration tools, games, and so on without having to be connected to the global internet. Oh, and getting files from one computer to another right next to it finally becomes an easy thing to do (airdrop level of easy).

👩🏽‍💻 Personal Web (D3 E4 I2)

Personal Data and programs are under user control.

The memex becomes reality. The web becomes a drastically more personal thing. Users' data and exploration is under the users' control -- similar to how a "personal computer" is under the user's control, and "the cloud" is not. Users decide which apps and other people get access to their data. Explorations can be recorded for the user in memex fashion. The user gets to keep copies of all the data they have observed through the web. A self-archiving personal record forms, which the user can always go back to, explore, and use -- whether or not those applications are still in development by their authors.

👟 Sneaker Web (D3 E2 I4)

The web functions over disconnected sneaker networks, spreading information, app data, apps, and more.

The web is capable of working fully distributed, and can even hop across disconnected components of the internet. Apps and their data can flow across high latency, intermittent, asynchronous links across them. People in disconnected networks get the same applications, the same quality of experience, and the same ability to distribute their contributions as anybody in the strongest connected component ("the backbone of the internet"). The Web is totally resistant to large scale partitions. Information can flow so easily across disconnected components that there is no use in trying to block or control information at the borders.

🚀 Interplanetary Web - Mars 2024. (D3 E3 I4)

Mars. Let's live the interplanetary dream!

SpaceX plans to land on Mars in 2022, and send humans in 2024. By then, IPFS should be the default/best choice for SpaceX networking. The first humans on Mars should use IPFS to run the top 10 networked apps. That means truly excellent and well-known IPFS apps addressing the top 10 networked use cases must exist. For that to happen, the entire system needs to be rock solid, audited, performant, powerful, easy-to-use, well known, and so on. It means IPFS must work on a range of platforms (desktop, servers, web, mobile), and work with both special purpose local area networks and across interplanetary distances. If we achieve this while solving for general use and general users (not specifically for the Mars use case), then IPFS will be in tremendous standing.

💾 Packet Switched Web (D3 E2 I3)

IPFS protocols use packet switching, and the network can relay all kinds of traffic easily, tolerating switch failures.

The infrastructure protocols (libp2p, IPFS, etc.) and the end-user app protocols (the logic of the app) can work entirely over a packet switching layer. Protocols like BitSwap, DHT, PubSub become drastically higher performance, and unconstrained by packets sent before theirs. Web applications can form their own isolated virtual networks, allowing their users to distribute the packets. Users can form their own groups and their own virtual networks, allowing users to only operate in a subnet they trust, and ensure all of their traffic is moving between trusted switches. The big public network uses packet switching by default.

📑 Data Web (D4 E3 I3)

Large Datasets are open, easy to access, easy to replicate, version controlled, secure, permanent.

We constantly lose access to important information, either because it ceases to exist or simply due to virtual barriers (i.e. censorship, lack of connectivity and so on). Information also often fails to reach the peers that need it most, and there aren't good ways to signal that some dataset exists but hasn't been referenced. We want to improve this dramatically, making the data that is produced easier to access by making it versioned, secure, and easier to replicate and locate.

✉️ Package Switched Web (D4 E2 I2)

Data in the web can be moved around over a package switching network. Shipping TB or PB hard drives of data becomes normal.

Beyond circuit switching and packet switching, the web works over package switching! It is possible to send apps, app assets, app user-generated data, and so on via hard drives. This means that the network stack and the IPLD graph sync layers are natively capable of using data in external, removable media. It is easy for a user Alice to save a lot of data to a removable drive, for Alice to mail the drive to another user Bob, and for Bob to plug in the drive to see his application finish loading what Alice wanted to show Bob. Instead of having to fumble with file exports, file systems, OS primitives, and worse -- IPFS, libp2p, and the apps just work -- there is a standard way to say "I want this data in this drive" and "I want to use the data from this drive". Once that happens, it can enable a proper sneakernet web.

Self-Archiving Web (D4 E4 I4)

The Web becomes permanent, no more broken Links. Increase the lifespan of a Webpage from 6 months to ∞ (as good as a book).

The Internet Archive(s, plural) content-address their snapshots to maximize deduplication and hit rate. IPFS becomes the platform that enables the multiple Internet Archives to store, replicate, and share responsibility over who possesses what. It becomes simple for any institution (from a large organization to a small local library) to become an Internet Archive node. Users can search through these Internet Archive nodes, fully compliant with data protection laws.

🏷 Versioning Datasets (D4 E3 I3)

IPFS becomes the default way to version datasets, and unlocks a dataset distribution and utility explosion similar to what VCS did for code.

IPFS emerged from dataset versioning, package management, and distribution concerns. There are huge gaping holes in this space because large datasets are very unwieldy and defy most systems that make small files easy to version, package, and distribute. IPFS was designed with this kind of problem in mind and has the primitives in place to solve many of these problems. There are many things missing: (a) most importantly, a toolchain for version history management that works with these large graphs (most of what git does). (b) Better deduplication and representation techniques. (c) Virtual filesystem support -- to plug under existing architectures. (d) Ways to easily wrap existing data layouts (filestore) -- to plug on top of existing architectures. (e) An unrelenting focus on extremely high performance. (f) Primitives to retrieve and query relevant pieces of versioned datasets (IPLD Selectors and Queries).

But luckily, all of these things can be added incrementally to enhance the tooling and win over more user segments.

🗃 Interplanetary DevOps (D4 E2 I2)

Versioning, packaging, distribution, and loading of Programs, Containers, OSes, VMs, defaults to IPFS.

IPFS is great for versioning, deduping, packaging, distributing assets, through a variety of mediums. IPFS can revolutionize computing infrastructure systems. It has the potential to become the default way for datacenter and server infrastructure users to set up their infrastructure. This can happen at several different layers. (a) In the simplest sense, IPFS can help distribute programs to servers, by sitting within the OS, and plugging in as the downloading mechanism (replace wget, rsync, etc.). (b) IPFS can also distribute containers -- it can sit alongside docker, kubernetes, and similar systems to help version, dedup, and distribute containerized services. (c) IPFS can also distribute OSes themselves, by plugging in at the OS package manager level, and by distributing OS installation media. (d) IPFS can also version, dedup, and distribute VMs, first by sitting alongside host OSes and hypervisors moving around VM snapshots, and then by modeling VMs themselves on top of IPFS/IPLD. --- To get there, we will need to solve many of the same problems as package managers, and more. We will need the IPLD importers to model and version the media super-effectively.

📖 The World's Knowledge becomes accessible through the DWeb (D5 E2 I5)

Humanity deserves equal access to knowledge. Platforms such as Wikipedia, Coursera, edX, Khan Academy and others need to be available independently of location and connectivity. The content of these services needs to exist everywhere. These replicas should be part of the whole world's dataset, not disjoint datasets. Anyone should be able to access them through the protocol, without having to deploy new services per area.

🌐 WebOS (D5 E2 I3)

The Web Platform and the OSes merge.

The rift between the web and the OS is finally healed. The OS and local programs and WebApps merge. They are not just indistinguishable, they are the same thing. "Installing" becomes pinning applications to the local computer. "Saving" things locally is also just pinning. The browser and the OS are no longer distinguishable. The entire OS data itself is modelled on top of IPLD, and the internal FileSystem is IPFS (or something on top, like unixfs). The OS and programs can manipulate IPLD data structures natively. The entire state of the OS itself can be represented and stored as IPLD. The OS can be frozen or snapshotted, and then resumed. A computer boots from the same OS hash, drastically reducing attack surface.

Comments
  • Asks for libp2p team 2019 Roadmap

    We need to have concrete actionable asks for the libp2p team to be surfaced at their meetup next week and tentatively incorporated into their 2019 roadmap planning.

    Rough ideas from various 2019 planning discussions:

    • DHT crawler and debugging tool customization -- one command tell us what is going on
    • Fast (<5 sec) mutable name resolution for any IPNS record
      • ipns-pubsub stable and enabled by default (package manager need from https://github.com/ipfs/notes/issues/366)
    • "There is a set of runnable benchmarks which can measure real world data transfer speed of the go-IPFS system as a whole against traditional file exchange tools" (a shared item with go-ipfs - https://github.com/ipfs/team-mgmt/pull/794)
    • "Total wall-clock time for finding via the DHT and fetching data doesn’t exceed 3s (on average) for first byte across various node configurations (ex geographical distance)."
    • p2p transport (aka bluetooth or equivalent)
      • support for a variety of device types (desktop/mobile/IoT)
      • support for nearby node discovery and fully p2p (offline) discovery
    • Ability to add a 1m sharded index without disabling content routing

    @ipfs/wg-captains @ipfs/go-team @ipfs/javascript-team - can you think of additional requests we should be surfacing to the libp2p team?

  • Tackle tracking protection

    As the conventional web became more popular, advertisers realized it could be a perfect vehicle for their business. Today we're tracked across the web to drive that business. Some browser vendors are putting a lot of effort into blocking trackers. As I understand it, use of DHTs makes tracking even easier, and as IPFS gets more popular it will attract the same actors. Furthermore, it creates a huge risk for people living under censorship, as prosecutors could gain the ability, through tracking, to discover people accessing censored content.

    Maybe evaluating papers that attempt to solve this e.g. Octopus: A Secure and Anonymous DHT Lookup would be a worthy goal for the roadmap.

  • How can ProtoSchool best support the IPFS project?

    As we build the roadmap for ProtoSchool, we'd like to take into account the priorities of the IPFS team and plan for some tutorial content that best highlights your most common or most prioritized use cases or features.

    For a sense of what ProtoSchool is capable of, please take a look at the existing tutorials, which run in-browser and (with the exception of the first tutorial on the list) offer coding challenges following the introduction of various content step-by-step. Beginner-friendliness is a major priority for this project, and as we consider requests from project teams we will also focus on ensuring that appropriate scaffolding exists to get users to the point where they can successfully approach those proposed topics.

    Could you please take a look at your project roadmap and help me understand what ProtoSchool tutorial content might most help you achieve your goals for 2019 and 2020?

    Do you have upcoming events where you hope to offer workshops? If you could envision that content fitting with the ProtoSchool tutorial format, please be sure to include these ideas and share the relevant event dates.

    cc @mikeal

  • [2020 Theme Proposal] IPFS Cluster Applications

    Note, this is part of the 2020 Theme Proposals Process - feel free to create additional/alternate proposals, or discuss this one in the comments!

    Theme description

    Many people come to IPFS believing that simply adding/pinning a file enables instant distributed, redundant, permanent storage of arbitrary data among (presumably) peer nodes. This is sadly far from the truth, and the realization can lead people to leave the community feeling let down - and not likely to return. But with the right applications, incentives, and defaults, peer groups could easily self-organize and provide this idealized dream of IPFS for themselves (at least).

    This is, IMHO, the first step to true general purpose decentralized applications.

    Core needs & gaps

    As an end user and/or dapp creator, I want the default behavior to be hosting, and requesting that others host, a set of common, collectively valued data among a peer group. At present, this takes a lot of research and configuration to achieve, if it's achievable at all.

    Why focus this year

    • IPFS Cluster has recently gotten to the point that it can deliver this solution.
    • Great work on permissioned and private networks on IPFS (#44, Textile, Peergos, etc.) can enable configurable clusters sharing private and secure files among a sub-network.

    Milestones & rough roadmap

    • A minimal forkable set of examples akin to, or incorporated with, those found in JS IPFS to build off of.
    • A full-fledged application incubated by our PL/IPFS community that uses cluster to highlight this use case. Examples:
      • An auto-replicating "who is online" guestbook webapp. Anyone can join the peer group, add their peer ID, and sign it (cryptographically). To show up as online you must host all of the app's data on your node so others can get the app from you, and you must be online to remain on the log. A cute way to illustrate a true serverless dapp.
      • A community-shared database. It would include assets that all participants store and relay redundantly - things like a group website, photo album, chat app, and wiki/docs - so no server is needed; only at least one member of the community needs to have their node running for the resources to be accessible. (Something I personally would love to get involved in and gather community support to build - see here - could fit nicely into community engagement goals (#42).)
      • A group password/keystore backup where sharded anonymous data are spread randomly across a small permissioned network such that only the owner of the keystore could know and privately collect the chunks needed to reconstruct their data. No one else on this network could (trivially) discover any keystore file, despite holding fragments of many of them.

    Desired / expected impact

    How will we measure success? What will working on this problem statement unlock for future years?

    • Increased grass-roots use of IPFS
    • Increased IPFS clients/nodes providing a useful service while online, so high uptime and availability on the network can be expected
    • Decreased reliance on central gateways, increased community hosted gateways
    • Decreased reliance on central servers/resources for dapps in general, through the use of clusters of dapp users - true dapps!
  • The IPFS Project Roadmap

    The IPFS Project Roadmap

    The IPFS Project Roadmap is here \o/

    It is with great pleasure that I announce, on behalf of all the extraordinary humans that are part of the IPFS Org, that we now have a full IPFS Project Roadmap!


    This Roadmap will stay in review stage until the first week of January 2019, when we tick the version to v1.0.0 and do a broadcast introducing it to the whole World. Until then, all the feedback is very much appreciated, feel welcome to post comments on this thread directly! Thank you!

  • PL EngRes IPFS Stewards / IPFS-in-JS Roadmap

    PL EngRes IPFS Stewards / IPFS-in-JS Roadmap

    "Pomegranate" is the temporary codename for the new IPFS implementation in JavaScript which will be developed starting in late 2022.

    For a link to the Roadmap, and a place to share comments, see:

    • https://github.com/ipfs/pomegranate/issues/5
  • [2021 Theme Proposal] Increase max block size / default to blake2b-256

    [2021 Theme Proposal] Increase max block size / default to blake2b-256

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    One thing that limits IPFS usage for large datasets is its very small block size. The 256 KiB default is too small, and the largest allowed size (at least for files) is 1 MiB. For good disk performance a minimum of 4 MiB would be needed. This would also drastically improve performance when using cloud backends for data storage.

    Hypothesis

    Large datasets for reproducible research are a pain to move around; IPFS can fix this.

    Vision statement

    Allow Bitswap to handle 4 MiB blocks.

    Why focus this year

    Usage is growing, and moving to a 4 MiB default would result in a 16x reduction in per-block overhead. It would also make it performant to use HDDs as a backing store.
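The 16x figure is just the ratio of block counts for a fixed amount of data. As a quick sanity check (illustration only, assuming a 1 GiB dataset):

```go
package main

import "fmt"

func main() {
	const dataset = 1 << 30 // an assumed 1 GiB dataset, for illustration

	blocks256KiB := dataset / (256 << 10) // today's default block size
	blocks4MiB := dataset / (4 << 20)     // proposed default

	// 16x fewer blocks to hash, announce, and track per dataset
	fmt.Println(blocks256KiB, blocks4MiB, blocks256KiB/blocks4MiB) // prints "4096 256 16"
}
```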

    Example workstreams

    Please list relevant workstreams, development milestones, and a high-level timeline for these efforts.

    Other content

    Please include links to other relevant content, notes, etc.

  • [2021 Theme Proposal] Permissionless Front-End

    [2021 Theme Proposal] Permissionless Front-End

    Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!

    Theme description

    Please describe the objective of your proposed theme, what problem it solves, and what executing on it would mean for the IPFS Project.

    Hypothesis

    Please describe the core hypotheses that you would need to believe for this theme to make sense as a 2021 IPFS project theme.

    Vision statement

    Please describe what the state of the IPFS project would look like if execution of this theme is massively successful.

    Why focus this year

    Please discuss why 2021 is the right year for this theme.

    Example workstreams

    Please list relevant workstreams, development milestones, and a high-level timeline for these efforts.

    Other content

    Please include links to other relevant content, notes, etc.

    At the moment, DDToken.crypto is hosted on IPFS, but we would like a permissionless front-end to match the permissionless back-end for DDToken.io, and we understand this is one of the best applications of Filecoin for Decentralized Finance (DeFi). I believe that IPFS is working with Filecoin, and we would definitely like a permissionless front-end for DDToken.io.

    https://twitter.com/DDGaddis/status/1305633199242969088?s=20

  • Descope Go-IPFS 2019 roadmap for Package Managers priority

    Descope Go-IPFS 2019 roadmap for Package Managers priority

    This is just a first pass at reformatting our 2019 roadmap to narrow in on package manager support. I think there are more of these that we should proactively drop - and more that we know now that we should add. @Stebalien @eingenito @ipfs/go-team for thoughts and improvements!

  • Add a note to the roadmap about the February update

    Add a note to the roadmap about the February update

    I added a note about the February 3 update to the roadmap to more clearly describe our current state - where the project-level roadmap describes 1 main top-level priority, while the working group roadmaps all focus on 5.

    Proposal: I think we should merge the CURRENT 2019 working group roadmaps prior to any descoping changes. This helps us:

    1. reflect our current incremental status (project goals updated, WGs working to rescope), and the working group priorities that drove our Q1 OKRs,
    2. better support our current community members who want to easily reference what the WGs are focusing on tactically, and
    3. document the history of what we planned to take on and then descoped - along with the rationale for why.

    We already want to do descoping passes in a separate PR - why not commit the first batch now so that WG roadmaps are easy to find and reference?

  • Exercise: Allocating WG Roadmap Milestones by Quarter

    Exercise: Allocating WG Roadmap Milestones by Quarter

    Hello IPFS WGs - @ipfs/wg-captains, @ipfs/contributors! We’re trying to solidify Q1 OKRs by end of next week (1/18)! Now that we have defined our goals for 2019 in this repo, we are excited to start using our 2019 Working Group Roadmaps to gain perspective and confidence that our quarterly efforts put us on track to meet our yearly objectives. To help us do this, the Project Working Group prototyped and refined a short ~15-25 minute exercise that we encourage all working groups to do (either sync or async) as a quick feedback mechanism for our Q1 OKRs. As a benefit - it helps us tighten up our 2019 Working Group Roadmaps and make them more actionable and informative for the wider community. 🎉

    The exercise:

    • Step 1 (1 min): Divide milestones amongst team members - either with small groups owning sets of milestones (ex “Package Managers”, “Large Files”, “Production Ready”), or individuals taking ownership of specific milestones.
    • Step 2 (5-10 mins): Within Github, a Google Doc, or a Cryptpad - team members assign each milestone a quarter, or, if the milestone will span multiple quarters of work, break the milestone down into sequential “Parts 1, 2, 3” of work that are each assigned a quarter. In addition, team members give a rationale for why that quarter (either with a quick verbal explanation, or by writing a comment - ex, “I think we need to do benchmarking this quarter so we can prioritize among improvements to X the next quarter”).
    • Step 3 (1-5 min): Rearrange milestones and parts into the “Timeline” section of each WG Roadmap by quarter and look at the distribution of work. Modify and iterate as needed to make sure one quarter in particular isn’t overloaded with too many large commitments for specific resources.
    • Step 4 (5-10 mins): Look back at your Q1 OKRs - do your key results align with the work described in the Q1 section of your 2019 roadmap? If not, reflect on what work is important to prioritize this quarter to most efficiently reach our 2019 goals (ex through accomplishing prerequisites, accelerating development, simplifying maintenance, or utilizing ecosystem effects) and add/remove KRs accordingly.

    What we did in our Project WG meeting:

    • We allocated 1 project wg member per 2019 priority area and spent 5 minutes silently doing steps 2 & 3 (could have easily been done async).
    • We then took an additional ~15 minutes of our meeting to walk through each milestone and let each individual explain their rationale (ex past/future dependencies, ecosystem effects, etc). As a group, we moved a few milestones around based on load distribution and other comments.
    • (WIP) Async, we compared our Q1 milestones and Q1 OKRs to note discrepancies and ensure our goals put us on track for future quarters. Want to see it in action? Watch the recording of our meeting! Feel free to zoom through our 5 mins of silence where we did steps 2&3 async or just watch us talk through an example. ;)

    Goals and benefits:

    1. Waiting to chart the quarter until we're doing that quarter's OKRs can easily cause us to run out of time to handle important tasks by not starting early enough. If we take a stab at allocating our quarters out right now, we can try and rebalance (and proactively readjust quarters that seem overloaded) to make sure we get the most important work accomplished.
    2. Assigning milestones a quarter estimate helps community members understand when to interface with us on a particular effort on our 2019 roadmap.
    3. Our dependencies (chunks of work that build on each other) get more clearly defined by charting incremental work across quarters - and it helps us ensure we’ll have time to complete dependencies before the efforts that depend on them.
    4. In addition to “priority”, thinking about allocating milestones in time helps us proactively prioritize items that we expect to accelerate our development or create ecosystem effects within our community.

    Excited to see how this helps other WGs get clarity and excitement for what we plan to accomplish this quarter! And pumped to get all these awesome PRs merged. =]

  • “Slimmer Kubo”: Trim out unnecessary code

    “Slimmer Kubo”: Trim out unnecessary code

    eta: 2023-03-31

    description: Clean up old, non-critical functionality from Kubo to limit supported surface area and help team focus on strategic priorities/functionality.

    Kubo today has become a bit bloated with functionality that is not critical to our key use cases, has not been touched in a long time, etc. We then get issues opened against that functionality or questions about it, which diverts team time/resources away from key strategic work. We would like to clean some of this out of Kubo to help us be more efficient/effective, and focused on top priorities.

    Done:

    Remove some specific things: FUSE, Graphsync, and others based on analysis. This actually requires a multi-part deprecation strategy, so we would start with deprecation, with code removal occurring potentially 6 months later.

  • Implement default content routing selection

    Implement default content routing selection

    eta: 2023-06-30

    description: Depends on:

    • Indexer double hashing support
    • Spec for content routing selection

    Done:

    Full support (Kubo + any necessary infrastructure) for content routing selection, including default use of cid.contact by Kubo

  • Verifiable retrieval in light clients

    Verifiable retrieval in light clients

    eta: 2023-03-31

    description: Theme: Ubiquitous Clients

    Deliver Rust library for client-side verifiable retrieval in light clients.

    Depends on:

    • Enable verifiable retrieval on the gateway

    Done:

    Deliver Rust library for client-side verifiable retrieval in light clients.

    Notes:

    • We need to ensure we have alignment with stakeholders on the use of Rust here.
    • We believe it’s strategic for IP Stewards to invest in building muscle with Rust, and this is a good starting point.

  • Indexer double hashing support

    Indexer double hashing support

    eta: 2023-03-31

    description: Define and deliver support for double hashing with indexers from Kubo to enhance IPFS privacy.

    Shared effort with Probelab, Bedrock, and libp2p.

  • libipfs

    libipfs

    eta: 2023-03-31

    description: Core Kubo functionality is refactored into a library used both by Kubo as well as other implementers who want to support additional use cases without blocking on Kubo maintainers.

    Kubo has accumulated a lot of battle-tested code over the years. Unfortunately it’s not easy to consume as a library because of the repo sprawl and having to pull in all the relevant pieces. It’s also hard for maintainers to keep it all updated, and we’re reliant on Kubo as the delivery vehicle to specify compatible versions. Given the success go-libp2p has had moving to a monorepo, we will move to a similar pattern where Kubo specifics stay in Kubo, but the parts generally usable by anyone writing an IPFS implementation in Go can live in libipfs.

    Done: Core Kubo functionality is packaged into a library used both by Kubo as well as other implementers who want to support additional use cases without blocking on Kubo maintainers
