What DNS server is never the authoritative source for a domain, but only serves to resolve names?

DNS reconnaissance

Allan Liska, Geoffrey Stowe, in DNS Security, 2016

Collection of Query Data

DNS queries can be monitored and logged in at least three places besides the client itself: the recursive resolver, the authoritative server, and the network connecting them all. The threat profile is often different for each piece of infrastructure and ultimately will depend on the practices of the organizations running those services. For example, large technology companies may have more robust security practices, but will happily collect and analyze the data themselves for advertising purposes. Small offshore providers may not collect anything, but could be more vulnerable to hacks. In general, an administrator should understand what data is leaving his network, what entities have access to it, and how it could be used in the future.

Administrators who run their own recursive resolvers control exactly what data is collected and set the retention policies themselves. Those who use a public or third-party recursive server need to be aware of how their query data is stored and used. One example is Google’s Public DNS service, which has a clear policy. They will store detailed records, including the source IP and query, for 24–48 hours in a “temporary log.” They then purge the source IP and create a “permanent” log with information like the requested domain, user’s region or city, and timing of responses. They also state that they “don’t correlate or combine information from our temporary or permanent logs with any personal information that you have provided Google for other services.”11 OpenDNS, another publicly available recursive server, does not provide as much detail on their data handling. But they do say “OpenDNS stores certain DNS, IP address and related information about you to improve the quality of our Service, to provide you with Services and for internal business and analysis purposes.”12 Some competing services advertise that they do not log any data at all. DNS.Watch, a service based out of Germany, is one example. Depending on the sensitivity of an organization’s DNS records, some administrators will be fine with recursive providers storing aggregate information, but others will want complete anonymity. So with some effort, administrators can usually find recursive DNS providers that follow the level of privacy they desire.

Authoritative servers, by contrast, should generally be assumed to log and retain all data they receive. Sometimes this is part of their business, as web site visits can be considered “intent” signals, which are used in ad targeting. Oftentimes this data also comes via web server logs, since DNS queries frequently precede web page requests. Many large hosting providers do not differentiate between DNS data and web data in their privacy policies, but simply say that they may collect IP addresses and other online activity. This level of monitoring will come as no surprise to most security professionals, and probably not to most Internet users in general. But one potentially overlooked threat vector is the concentration of DNS providers. As stated in RFC 7626:

among the Alexa Top 100K, one DNS provider hosts today 10% of the domains. The ten most important DNS providers host together one third of the domains. With the control (or the ability to sniff the traffic) of a few name servers, you can gather a lot of information.

Research from Google and Inria, a French institute, showed that the majority of Internet users could be uniquely identified after visiting four web sites.13 With large concentrations of DNS data from multiple web sites being stored in the same place, de-anonymization becomes more likely. As described earlier in this book, authoritative servers will not necessarily see the true source of a query, since it may come through intermediate resolvers. Combined with the effect of caching, authoritative servers will not have nearly as pure a source of data as the researchers worked with. But even small batches of de-anonymized data would present a very sensitive source of information. An oft-cited example is that knowing a specific person has regularly visited an alcoholism support forum is an extremely personal piece of information.

Network operators theoretically have access to all DNS traffic that passes through their links, but they often have some level of legal or policy restrictions on what they can do with the data. One exception is they almost always have the ability to monitor and collect any traffic in order to run their business and maintain the integrity of the infrastructure. Some of this is definitional; a network operator, of course, needs to view at least some part of a packet in order to perform its business of routing it to the correct place. For example, Time Warner, one of the largest Internet providers in the United States, says in its subscriber privacy notice that “we may collect personally identifiable information (described below) over a cable system without your consent if it is necessary to provide our services to you or to prevent unauthorized access to services or subscriber data.” While the policy does not mention DNS specifically, it does differentiate between the content of Internet traffic and aggregate statistics. For example, it says “[i]f you use a web-based email service, we do not collect any information regarding the emails that you send and receive.” But they do “have information about how often and how long you use our service, including the amount of bandwidth used; technical information about your computer system, its software and modem; and your geographical location.” They may use this information “to make sure you are being billed properly for the services you receive; to send you pertinent information about our services; to maintain or improve the quality of the TWC Equipment…[and] to market Time Warner Cable Services and other products that you may be interested in.”14

Comcast, another large Internet provider, also says they may “collect and store for a period of time, personally identifiable and non-personally identifiable information about you when you use our high-speed Internet.” Examples of an action that could be logged are to “send and receive e-mail, video mail, and instant messages” or “visit websites.”15 In 2014, AT&T generated some news stories when they offered a cheaper version of their fiber Internet service if users would agree to routine data collection.16

In the United States, most legal restrictions apply to telephone and video services, but not Internet access. Many other countries have adopted distinctions between content and aggregate information. An example policy from Australia holds that “carriers and carriage service providers are prohibited from using or disclosing any information which comes into their possession in the course of their business and which relates to [among other things] the contents of communications that are being or have been carried by carriers or carriage service providers.” Here too they include an exception for “business needs of other carriers or service providers” or “the performance of a person’s duties as an employee.”17 As a general rule, specific DNS queries will probably not be viewed by employees operating the Internet infrastructure. But if the queries are so sensitive that no one outside a trusted organization should ever have access, they should never leave the enterprise network unencrypted.

A related form of data collection is how ISPs retain IP assignment records. These tie individual customers to the IP addresses they were using on the Internet. The BitTorrent community is particularly active in tracking these policies because of lawsuits that have been filed over copyright infringement. According to their research, most major US ISPs retain IP records for between 6 and 12 months.18 For those less concerned about being accused of copyright violations, this data could still be sensitive if combined with DNS logs from another source because it can de-anonymize those queries.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978012803306700005X

The Domain Name System

Walter Goralski, in The Illustrated Network (Second Edition), 2017

The DNS Hierarchy

DNS servers are arranged in a hierarchical fashion. That is, the hundreds of thousands of systems that are authoritative for the FQDNs in their zone are found at the bottom of the DNS “pyramid.” For ease of maintenance, when two or more DNS servers are involved, only one of them is flagged as the primary server for the zone and the rest become secondary DNS servers. All of them are authoritative for the zone. ISPs typically run their own DNS servers, often for their customers, with the actual number of systems for each ISP depending on the size of the ISP. At the top of the pyramid is the “backbone.” There are root servers for the root zone and others for .com, .edu, and so on.

DNS servers above the local authoritative level refer other name servers to the systems beneath them, and when appropriate each name server will cache information. Information provided to hosts from any but the authoritative DNS system for the domain is considered non-authoritative, a designation not reflecting its reliability, but rather its derived nature.
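
On the wire, this distinction is carried by the AA (Authoritative Answer) flag in the response header. The following is a minimal sketch of how to observe it, assuming the dnspython library is available; example.com and the 8.8.8.8 resolver are placeholders, and any zone and recursive server would do.

# Sketch: compare the AA (Authoritative Answer) flag in responses from one of a
# zone's authoritative servers and from a recursive resolver. Assumes the
# dnspython library; the zone name and resolver address are placeholders.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

NAME = "example.com."

# Look up one of the zone's authoritative servers, then its address.
ns_name = str(dns.resolver.resolve(NAME, "NS").rrset[0].target)
ns_addr = str(dns.resolver.resolve(ns_name, "A").rrset[0])

for server in (ns_addr, "8.8.8.8"):   # authoritative server, then a recursive resolver
    query = dns.message.make_query(NAME, dns.rdatatype.A)
    response = dns.query.udp(query, server, timeout=5)
    print(server, "AA set:", bool(response.flags & dns.flags.AA))

Against the zone's own name server the AA bit is set; against the recursive resolver it is not, even though the answer data is the same.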

Authoritative and non-authoritative servers can be further classified into categories. Authoritative servers can be:

Primary—The primary name server for a zone. It finds its information locally in a disk file.

Secondary—One or more secondary name servers for the zone. They get their information from the primary.

Stub—A special secondary that contains only name server data and not host data.

Distribution—An internal (or “stealth server”) name server known only by IP address.

Keep in mind that the primary and secondary distinction is relevant only to the operator of the systems and not to the querier, who treats them all the same. Non-authoritative servers (technically, only the response is non-authoritative) can be:

Caching—Contains no local zone information; it just caches what it learns from other queries and responses it handles.

Forwarder—Performs the queries for many clients. Contains a huge cache.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128110270000230

Security and Robustness in the Internet Infrastructure

Krishna Kant, Casey Deccio, in Handbook on Securing Cyber-Physical Critical Infrastructure, 2012

28.2.2 Dependencies in the DNS

Name dependencies in the DNS exist both as part of its hierarchical structure and as a result of explicit configuration. Although these dependencies allow administrative flexibility to the DNS, they also add complexity, which can affect the availability and reliability of a domain name. Three specific DNS components introducing domain name dependencies are the following:

Parent zones

A resolver must learn the authoritative servers for a zone from referrals from the zone's hierarchical parent. For example, foo.net depends on the net authoritative servers to provide the proper delegation records for the name in question.

NS targets

The NS RR type uses names, rather than addresses, to specify authoritative servers for a zone, so a resolver must have the corresponding addresses to query the servers. In the case of NS target names that are subdomains of the referring (parent) zone (in-bailiwick), glue records may be introduced into the parent zone; for NS target names that are subdomains of the delegated zone, glue must be introduced into the parent zone. In either case, when glue exists, the addresses are made available by the parent in the referral, so there is no dependency. However, a resolver must independently resolve any other names. For example, if ns1.bar.com is among the NS target names for foo.net, a resolver obtaining such a referral must resolve ns1.bar.com to obtain its address.

Aliases

If a name is an alias (i.e., corresponds to a CNAME RR), then to obtain the DNS data for the name, a resolver must subsequently resolve the alias target.

DNS dependencies are transitive and may be modeled as a directed graph reflecting dependency relationships [4, 5]. Figure 28-2 illustrates name dependencies for the foo.net name. The edges between elliptical nodes represent the name dependencies of different types.

Figure 28-2. The server dependency graph for foo.net. The gray, rectangular nodes represent name servers and the oval nodes represent domain names. Edges between one node and another represent a dependency of a domain name on another name or a server.

Edges between elliptical nodes (domain names) and rectangular nodes (IP addresses representing authoritative name servers) in Figure 28-2 represent server dependencies. Server dependencies stem from one of two different circumstances: a zone that has an in-bailiwick NS target name with a glue record (e.g., foo.net → 192.0.2.1) or a domain name that resolves to an Internet address (e.g., ns1.bar.com → 192.0.2.5).

Dependence for name resolution is modeled as a recurrence relation using the dependency graph. Each node representing a zone is dependent on both its parent zone and any one of its server or name dependencies. That is, its parent must be resolvable, so the delegation can be followed, and it must be able to query and receive a response from an authoritative server – either one provided by an in-bailiwick glue record or one learned by resolving the name of the NS target independently. Each node representing the name of a server is also dependent on its parent zone, as well as the address to which it resolves.

This recurrence produces a logical tree for determining the combinations of server availability that are necessary for a particular domain name to be resolved, such as that illustrated in Figure 28-3 for the dependencies from Figure 28-2. The foo.net zone depends on the net zone (parent) and either 192.0.2.1 (address supplied with in-bailiwick glue record), ns2.foo.net, ns1.bar.com, or ns3.bar.com (other NS target names). The ns1.bar.com domain name is dependent on the bar.com zone (parent) and 192.0.2.5 (the address to which it resolves), and so on.

Figure 28-3. A logical tree describing the availability of foo.net.
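
This recurrence is small enough to write down directly. The sketch below is purely illustrative Python and is not taken from the chapter: it hard-codes a simplified version of the foo.net dependency graph (omitting the ns2.foo.net branch) and evaluates whether the name is resolvable given a set of reachable server addresses; the .net, .com, and root addresses are invented placeholders.

# Illustrative sketch of the availability recurrence for the foo.net example.
# A zone is resolvable if its parent zone is resolvable AND at least one of its
# server options is usable; an NS target name is usable if its own parent zone
# is resolvable AND its address is reachable. The graph below loosely mirrors
# Figure 28-2 (the ns2.foo.net branch is omitted for brevity); the root, net,
# and com addresses are placeholders.
ZONES = {
    ".":       {"parent": None,  "servers": ["198.41.0.4"]},
    "net":     {"parent": ".",   "servers": ["192.0.2.10"]},
    "com":     {"parent": ".",   "servers": ["192.0.2.20"]},
    "foo.net": {"parent": "net", "servers": ["192.0.2.1",          # in-bailiwick glue
                                             "ns1.bar.com", "ns3.bar.com"]},
    "bar.com": {"parent": "com", "servers": ["192.0.2.5"]},
}
NS_TARGETS = {  # NS target names that must themselves resolve to an address
    "ns1.bar.com": {"zone": "bar.com", "address": "192.0.2.5"},
    "ns3.bar.com": {"zone": "bar.com", "address": "192.0.2.7"},
}

def zone_available(zone, reachable):
    info = ZONES[zone]
    if info["parent"] is not None and not zone_available(info["parent"], reachable):
        return False                      # the delegation cannot be followed
    return any(server_usable(s, reachable) for s in info["servers"])

def server_usable(server, reachable):
    if server in NS_TARGETS:              # a name: its zone and its address must both work
        target = NS_TARGETS[server]
        return zone_available(target["zone"], reachable) and target["address"] in reachable
    return server in reachable            # a glue address: just needs to be reachable

print(zone_available("foo.net", {"198.41.0.4", "192.0.2.10", "192.0.2.1"}))   # True
print(zone_available("foo.net", {"198.41.0.4", "192.0.2.10"}))                # False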

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124158153000285

Layer 7: The Application Layer

In Hack the Stack, 2006

The DNS Lookup Process

Whenever an application requests information from the DNS database, it sends the request to a service on the local host called the resolver. The resolver checks the local cache, which holds the responses of recent requests. In addition, it may also check the hosts file, which is a local file containing mappings of host names to IP addresses. If the requested information is not available from any of these locations, the resolver sends a request to the DNS server that the host’s Transmission Control Protocol/Internet Protocol (TCP/IP) network interface is configured to use.

A DNS server can be configured as the authoritative server for a domain, which means that it is responsible for holding the DNS information for the domain, and any requests for the information are directed to that server. When the authoritative server receives a request, it looks up the information in its local database for that domain (or zone) and returns the answer.

If a DNS server receives a request for which it is not the authoritative server, it does one of two things:

If the request is marked non-recursive, the server finds the address of the authoritative server for the requested domain and returns its address to the resolver. The resolver then directly contacts the authoritative server to obtain an answer to its request.

If the request is marked recursive, the server finds the address of the authoritative server, passes the request on to that server, and returns a response to the resolver.

If an answer or authoritative server for the request cannot be determined, the DNS server returns a message to the resolver stating that the answer is unknown.
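
The recursive/non-recursive distinction is signaled by the RD (Recursion Desired) flag in the query header. Below is a minimal sketch of both styles of query, assuming the dnspython library; the queried name, the 198.41.0.4 root-server address, and the 8.8.8.8 resolver are illustrative placeholders.

# Sketch: a non-recursive query sent directly to a root server returns a
# referral that the client must chase itself; a recursive query sent to a
# recursive resolver returns the final answer. Assumes the dnspython library.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

NAME = "www.example.com."
ROOT_SERVER = "198.41.0.4"      # a.root-servers.net
RECURSIVE = "8.8.8.8"           # a public recursive resolver

# 1. Non-recursive: clear the RD flag and accept a referral.
q = dns.message.make_query(NAME, dns.rdatatype.A)
q.flags &= ~dns.flags.RD
referral = dns.query.udp(q, ROOT_SERVER, timeout=5)
print("answer RRsets:", len(referral.answer))                        # typically 0
print("referred to:", [str(rr) for rr in referral.authority[0]][:3])

# 2. Recursive: leave RD set (the default) and let the resolver do the work.
q = dns.message.make_query(NAME, dns.rdatatype.A)
answer = dns.query.udp(q, RECURSIVE, timeout=5)
print("resolved:", [str(rr) for rr in answer.answer[0]])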

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781597491099500125

DNSSEC

Allan Liska, Geoffrey Stowe, in DNS Security, 2016

Other Uses of DNSSEC

With most of the core DNS infrastructure on the Internet now supporting DNSSEC, it presents an interesting opportunity. The infrastructure provides a way for clients to securely find authoritative servers for any part of the DNS namespace. It has many layers of caching that have been developed over more than two decades, and it is built for short-term expiration of data. So it is no surprise that people have proposed distributing TLS certificates via DNSSEC, for example. This process, originally proposed in RFC 6698, is called DNS-based Authentication of Named Entities (DANE). If widely implemented it could create competition for TLS CAs. DANE could also be used to distribute cryptographic keys for email, instant messengers, or other protocols.
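
As a small illustration of what a DANE lookup involves, a TLSA record lives at an owner name built from the service's port and protocol. The sketch below assumes the dnspython library and a DNSSEC-validating resolver; example.net is only a placeholder and would need to be replaced by a domain that actually publishes TLSA records.

# Sketch: fetch a DANE TLSA record for an HTTPS service. Assumes the dnspython
# library; the domain is a placeholder that must be replaced with one that
# actually publishes TLSA records, ideally behind a validating resolver.
import dns.resolver

domain = "example.net"                 # placeholder domain
name = "_443._tcp." + domain           # TLSA owner name: _<port>._<protocol>.<host>

try:
    for rdata in dns.resolver.resolve(name, "TLSA"):
        # Certificate usage, selector, matching type, and the certificate
        # association data a TLS client would compare against the server's
        # presented certificate.
        print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no TLSA record published at", name)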

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128033067000103

Anycast and other DNS protocols

Allan Liska, Geoffrey Stowe, in DNS Security, 2016

Anycast Motivation

To understand the importance of anycast, one can start with some statistics about the DNS backbone. Recall that the “root” of DNS is the set of servers that will point to authoritative servers for .com, .net, and any other Top Level Domain (TLD).

Due to the maximum size of a DNS packet, the number of root servers is limited to 13. This calculation leaves room for headers and assumes the smallest possible owner names.1 The root servers are labeled a.root-servers.net through m.root-servers.net. Every time a client resolves a domain, assuming nothing has been cached, the first query will always be to retrieve a list of the root servers (technically this is a query for “.”), followed by a query to one of those root servers to resolve the TLD.

$ dig @8.8.8.8 . NS

; <<>> DiG 9.8.3-P1 <<>> @8.8.8.8 . NS
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31592
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;.    IN NS

;; ANSWER SECTION:
.   10404 IN NS i.root-servers.net.
.   10404 IN NS l.root-servers.net.
.   10404 IN NS f.root-servers.net.
.   10404 IN NS g.root-servers.net.
.   10404 IN NS d.root-servers.net.
.   10404 IN NS j.root-servers.net.
.   10404 IN NS a.root-servers.net.
.   10404 IN NS k.root-servers.net.
.   10404 IN NS m.root-servers.net.
.   10404 IN NS c.root-servers.net.
.   10404 IN NS e.root-servers.net.
.   10404 IN NS b.root-servers.net.
.   10404 IN NS h.root-servers.net.

Sometimes this query will include an “additional answers” section with A records that specify the IP for each server. This is optionally displayed by dig and may also depend on whether the recursive server forwards that information.
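
One way to see both the root NS set and the glue addresses is to send the priming query directly to a root server instead of through a recursive resolver. A minimal sketch, again assuming the dnspython library, with the address of a.root-servers.net hard-coded for illustration:

# Sketch: send the priming query (". NS") straight to a root server and print
# the additional section, which carries glue A/AAAA records for the root
# servers. EDNS is enabled because the full response exceeds 512 bytes.
import dns.message
import dns.query
import dns.rdatatype

q = dns.message.make_query(".", dns.rdatatype.NS, use_edns=0, payload=4096)
resp = dns.query.udp(q, "198.41.0.4", timeout=5)    # a.root-servers.net

print("NS records in the answer:", len(resp.answer[0]))
for rrset in resp.additional:
    print("glue:", rrset.name, dns.rdatatype.to_text(rrset.rdtype), rrset[0])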

Estimates for the total load on root DNS servers range from hundreds of thousands of queries per second to millions per second and depend on the time of day and whether the infrastructure is under attack. In early 2016, the k-root server was handling between 40,000 and 60,000 queries per second.2 The load tended to peak around midday in the UTC time zone and hit a trough around midnight. On the same day, l-root experienced between 30,000 and 45,000 queries per second.3 Since the root servers are queried in a round-robin fashion, it is a reasonable approximation to multiply the load on one server by 13 and conclude that total root traffic is between 400,000 and 800,000 queries per second. In an extreme case, an attack against the root servers in November 2015 generated an estimated 5 million queries per second, which was absorbed by the infrastructure.4 Cisco estimates that Internet traffic grew by a factor of 5 between 2010 and 2015,5 and total traffic at the Amsterdam Internet Exchange, one of the major peering points, grew at a similar rate.6 The number of DNS queries is not perfectly correlated with total Internet traffic, since streaming video now dominates bandwidth, but it does give a rough approximation for future network growth. Based on this, it is reasonable to say the root DNS infrastructure will need to handle average loads of millions of queries per second, with a peak load several times that number.

How can an infrastructure handle this load and remain highly available? In designing the system for the root servers, one would face at least two bottlenecks: network bandwidth and processor capacity. High-end routers are generally designed to process packets at “line speed,” so a gigabit router handling nothing but 512-byte UDP DNS packets should be able to transfer around 250,000 packets per second. Routers may encounter other limitations like security filters or logging, but those either would not apply to a publicly available service like DNS or could be tuned away by experienced administrators. A bigger concern would be handling larger numbers of queries over TCP, because those require more packets and routers often have some overhead for each new connection. Also, as DNSSEC becomes more widespread, validation will add two or three times as many queries. One could handle this by constantly running larger routers, and in fact the B root server takes this approach.7 Of course, upstream network capacity will have to be similarly provisioned to avoid creating bottlenecks.
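
The 250,000 figure is a straightforward back-of-the-envelope calculation from the link rate and the packet size; a quick check (ignoring Ethernet and IP framing overhead, so it is an upper bound):

# Rough upper bound on packets per second for a 1 Gbit/s link carrying
# nothing but 512-byte UDP DNS datagrams (framing overhead ignored).
link_bits_per_second = 1_000_000_000
packet_bits = 512 * 8
print(link_bits_per_second / packet_bits)   # about 244,000 packets per second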

The other constraint is processing capacity on the server. The root server needs to receive the DNS request, pass it through the IP stack, retrieve an answer from memory, create the response, and transmit a packet back out. A general rule of thumb is that a server can handle tens of thousands of UDP packets per second without much tuning. Cloudflare recently reported on the limitations encountered when trying to scale this number as high as possible. A simple first approach is to write a loop that sends essentially empty packets using sendmmsg and recvmmsg, the bulk versions of the sendmsg and recvmsg syscalls. This will send between 200,000 and 350,000 packets per second. To go beyond this limit, one must understand more about the specific network hardware and CPU being used. For example, most NIC cards have multiple send and receive queues that can be processed by different CPU cores in parallel. But this is often load-balanced depending on the source IP, destination IP, source port, and destination port. So a large amount of traffic on a single socket will bottleneck on a single CPU core. As described in their report, by spreading the traffic across multiple RX queues, multithreading the sending and receiving application, and keeping the threads accessing the same physical RAM, it is possible to send 1 million packets per second.8 This does not take into account any processing to create the packets, just purely sending and receiving. A simple implementation of the root nodes would require a map of TLDs to SOA records, which would require at least two memory accesses for each response. Since local memory access is particularly important for maintaining packet throughput to the CPUs, any memory lookups will be in contention with the packet queues.
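
For a feel for the shape of such a measurement, the sketch below counts packets through a plain Python UDP socket loop on the loopback interface. Python will fall far short of the batched-syscall and multi-queue numbers cited above; it is only meant to show the measurement loop, not to reproduce the results.

# Rough baseline: count UDP packets per second through an untuned Python
# socket loop on loopback. Far below the tuned figures cited above; shown
# only to illustrate the kind of loop such benchmarks build on.
import socket
import time

ADDR = ("127.0.0.1", 40053)        # arbitrary local port for the test

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
rx.setblocking(False)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * 64             # small dummy "query"

sent = received = 0
deadline = time.perf_counter() + 1.0
while time.perf_counter() < deadline:
    tx.sendto(payload, ADDR)
    sent += 1
    try:
        while True:                # drain whatever has arrived so far
            rx.recvfrom(2048)
            received += 1
    except BlockingIOError:
        pass

print("sent", sent, "received", received, "packets in about one second")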

The final consideration is maintaining low latency on queries. A common threshold for operations to be considered fully interactive is 100 milliseconds. This supposedly originated in telephony systems, where people will begin to change their speaking patterns if the delay is longer than 100 ms.9 For DNS infrastructure, the ideal latency is at least half or a third of that number because there will often be multiple recursive queries, and the query will likely be followed by more network requests like downloading a webpage. The root servers publish monitoring data, and on a day in early 2016 the median latency over a 10-minute period ranged from 9 to 165 ms, with the majority being below 50 ms.10 For comparison, Netcraft periodically publishes a list of the “most reliable hosting companies” and in November 2015, the top 10 had DNS query latency of between 94 and 278 ms.11
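
Checking where a particular resolver sits relative to these latency budgets takes only a few lines. A sketch assuming the dnspython library (2.x), with the resolver address and query name as placeholders:

# Sketch: time a handful of DNS queries against one resolver and report the
# median latency in milliseconds. Assumes dnspython 2.x; the resolver address
# and the queried name are placeholders.
import statistics
import time

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

samples_ms = []
for _ in range(10):
    start = time.perf_counter()
    resolver.resolve("example.com", "A")
    samples_ms.append((time.perf_counter() - start) * 1000.0)

print(f"median {statistics.median(samples_ms):.1f} ms, max {max(samples_ms):.1f} ms")

After the first query, the upstream resolver will usually answer from its cache, so the later samples mostly measure cached latency.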

The way the DNS root is able to achieve low latency despite such high load is by distributing the traffic over many servers located across different parts of the Internet. Since each DNS query can be processed independently, it is an easily parallelizable algorithm. But recall that each root server can only have a single IP address, since they must all fit in a single DNS packet. The way a single IP address can point to multiple servers in different parts of the Internet is a technique called anycast. This allows the L root server, for example, to operate more than 100 instances all using the same IP address.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128033067000115

MCSE 70-293: Planning, Implementing, and Maintaining a Name Resolution Strategy

Martin Grasdal, ... Dr.Thomas W. ShinderTechnical Editor, in MCSE (Exam 70-293) Study Guide, 2003

Planning DNS Server Placement

When considering where to place DNS servers, you should try to eliminate single points of failure to ensure the availability of DNS and AD services. This means that for every zone in your control, you should have at least two authoritative servers for fault tolerance. All DNS clients should be configured with the IP addresses of a primary and at least one alternate DNS server to contact for name resolution. The following guidelines might assist in determining the placement of your DNS servers:

On segmented LAN environments, you should have at least two authoritative servers. These servers should be installed on different subnets.

On a WAN, you should try to ensure that an authoritative DNS server is installed at each geographic location.

If you are hosting authoritative DNS for your Internet-facing hosts, such as your Web and mail servers, consider hosting an offsite secondary DNS server at your ISP or on your domain name registrar’s network.

Consider which services will be unavailable if the router fails on your network segment. For example, if you have a small branch office that lacks a domain controller, users will not be able to use the services provided by AD if the router fails. In this case, there might not be any advantage to deploying a secondary server that is authoritative for your AD zones.

Consider zone replication traffic across slow WAN links. If zone replication traffic consumes too much bandwidth, consider using forwarding servers in the remote location.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781931836937500105

Public Key Infrastructure

Terence Spies, in Computer and Information Security Handbook (Third Edition), 2017

Internet Engineering Task Force Open Pretty Good Privacy

The PGP public key system, created by Philip Zimmermann, is a widely deployed PKI system that allows for the signing and encryption of files and email. Unlike the X.509 PKI architecture, the PGP PKI system uses the notion of a “Web of Trust” to bind identities to keys. The Web of Trust (WoT) [1] replaces the X.509 idea of identity binding via an authoritative server with identity binding via multiple semitrusted paths.

In a WoT system, the end user maintains a database of matching keys and identities, each of which is given two trust ratings. The first trust rating denotes how trusted the binding is between the key and the identity, and the second denotes how trusted a particular identity is to “introduce” new bindings. Users can create and sign a certificate, and import certificates created by other users. Importing a new certificate is treated as an introduction. When a given identity and key in a database are signed by enough trusted identities, that binding is treated as trusted.
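
A toy version of the acceptance rule is easy to sketch. The code below is not OpenPGP's actual trust computation or any library's API; the names, ratings, and threshold are invented purely to illustrate how multiple semitrusted introducers can add up to a trusted binding.

# Toy illustration of Web-of-Trust style acceptance: a key/identity binding is
# accepted once enough trusted introducers have signed it. This is NOT the
# OpenPGP trust algorithm; names, ratings, and the threshold are invented.

# introducer name -> how much we trust them to introduce new bindings (0..1)
INTRODUCER_TRUST = {"alice": 1.0, "bob": 0.5, "carol": 0.5, "mallory": 0.0}

ACCEPT_THRESHOLD = 1.0   # total introducer trust required to accept a binding

def binding_trusted(signers):
    """True if the signatures on a key/identity binding carry enough introducer trust."""
    total = sum(INTRODUCER_TRUST.get(name, 0.0) for name in signers)
    return total >= ACCEPT_THRESHOLD

print(binding_trusted({"alice"}))            # True: one fully trusted introducer
print(binding_trusted({"bob", "carol"}))     # True: two marginally trusted introducers
print(binding_trusted({"mallory", "bob"}))   # False: not enough introducer trust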

Because PGP identities are not bound by an authoritative server, there is also no authoritative server that can revoke a key. Instead, the PGP model states that the holder of a key can revoke that key by posting a signed revocation message to a public server. Any user seeing a properly signed revocation message then removes that key from the database. Because revocation messages must be signed, only the holder of the key can produce them, so it is impossible to produce a false revocation without compromising the key. If an attacker does compromise the key, production of a revocation message from that compromised key actually improves the security of the overall system, because it warns other users not to trust that key.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978012803843700048X

Analyzing Internet DNS(SEC) Traffic with R for Resolving Platform Optimization

Emmanuel Herbert, ... Maryline Laurent, in Data Mining Applications with R, 2014

15.1 Introduction

Domain Name System (DNS) (Mockapetris, 1987a,b) is the computer protocol that facilitates Internet communication using hostnames by matching an Internet Protocol (IP) address to a Fully Qualified Domain Name (FQDN), e.g., “www.google.com.” DNS servers, which host the IP addresses of the queried web sites—that is to say the DNS responses—are called Authoritative Servers. Because Authoritative Servers would not be able to support all end users’ queries, the DNS architecture introduces Resolving Servers that cache the responses for Time to Live (TTL) seconds. Internet Service Providers (ISPs) manage such servers for their end users. Thanks to the caching mechanism, Resolving Servers do not need to ask Authoritative Servers if the response is still in their cache. This provides faster responses to the end user and reduces the traffic load on the DNS Authoritative Servers.
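
The caching behavior described here amounts to keeping each response for no longer than its TTL. A minimal sketch of such a cache (illustrative only; real resolving servers also implement negative caching, prefetching, and many other refinements):

# Minimal sketch of a resolving server's TTL cache: a response is reused until
# its TTL expires, after which the authoritative servers are asked again.
import time

class TTLCache:
    def __init__(self):
        self._entries = {}   # (name, rtype) -> (expiry_time, response)

    def get(self, name, rtype):
        entry = self._entries.get((name, rtype))
        if entry is None:
            return None
        expiry, response = entry
        if time.monotonic() >= expiry:           # TTL elapsed: treat as a miss
            del self._entries[(name, rtype)]
            return None
        return response

    def put(self, name, rtype, response, ttl):
        self._entries[(name, rtype)] = (time.monotonic() + ttl, response)

cache = TTLCache()
cache.put("www.google.com", "A", "203.0.113.7", ttl=300)
print(cache.get("www.google.com", "A"))   # hit: served from cache, no upstream query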

For multiple reasons, ISPs consider operating DNSSEC, the security extension of DNS defined in the standards (Arends et al., 2005a,b,c; Sawyer, 2005). With DNSSEC, a DNS response is signed so that its authenticity (generation by a legitimate Authoritative Server) and its integrity (nonmodification of response) can be checked. With DNSSEC, resolutions require multiple signature checks so that responses are around seven times longer than traditional DNS responses. Migault (2010), Migault et al. (2010), and Griffiths (2009) show that DNSSEC resolution platforms require up to five times more servers than DNS resolution platforms. Migault et al. (2010) measures that a DNSSEC resolution involves three signature checks and costs up to 4.25 times more than a regular DNS resolution. With the DNS traffic doubling every year and the deployment of its secure extension DNSSEC, DNS resolving platforms require more and more resources.

The operational problem faced is to reduce the resources needed by a resolving platform. The resolving platform consists of several DNS resolving servers behind a load balancer device. The load balancer splits the incoming traffic to distribute queries across the resolving servers. The classical way of load balancing is to assign each server a pool of clients to serve.

One way to reduce the load on a server is to lower the number of resolutions. To reduce the number of resolutions, Migault and Laurent (2011) and Francfort et al. (2011) evaluate the advantage of splitting the DNS traffic according to the queried FQDN rather than according to the IP addresses. This increases the efficiency provided by caching mechanisms, reduces the number of signatures to be checked, and can result in a 1.32 times more efficient architecture.
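
The difference between the two splitting strategies can be seen in a few lines of code. The sketch below is illustrative and not the authors' implementation: it hashes either the client IP or the queried FQDN onto one of a handful of resolving servers; the server count and the sample traffic are invented.

# Illustrative comparison of the two load-balancing strategies discussed above:
# split queries across resolving servers by client IP or by queried FQDN.
import hashlib

NUM_SERVERS = 4

def pick_server(key: str) -> int:
    """Stable hash of the key onto one of the resolving servers."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SERVERS

queries = [("192.0.2.10", "www.google.com"), ("192.0.2.11", "www.google.com"),
           ("192.0.2.12", "www.google.com"), ("192.0.2.13", "example.net")]

for client_ip, fqdn in queries:
    by_ip = pick_server(client_ip)      # the same name may be cached on several servers
    by_fqdn = pick_server(fqdn)         # the same name always hits the same cache
    print(f"{fqdn:15s} from {client_ip}: by-IP -> server {by_ip}, by-FQDN -> server {by_fqdn}")

Under the by-FQDN split, every query for www.google.com maps to the same server and therefore to a warm cache, whereas the by-IP split may spread those queries across several caches.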

To design this new load balancing mechanism, we first need to characterize the DNS traffic and to evaluate what the DNSSEC traffic looks like. We perform data extraction from raw network captures taken from a DNS resolving platform. The main challenge here is to define the variables that are extracted and computed for each FQDN. The goal is to define a routing table mapping each frequently requested FQDN to a server of the resolving platform.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124115118000165

Which DNS server is not the authoritative source for a domain but only for name resolution?

caching-only server: A Domain Name System server that has the ability to process incoming queries from resolvers and send its own queries to other DNS servers on the Internet, but which is not the authoritative source for any domain and hosts no resource records of its own.

What type of DNS server is authoritative for a specific domain?

The second type of DNS server holds a copy of the regional phone book that matches IP addresses with domain names. These are called authoritative DNS servers. Authoritative DNS nameservers are responsible for providing answers to recursive DNS nameservers about where specific websites can be found.

What is authoritative name server DNS?

An authoritative server is the authority for its zone. It queries and is queried by other name servers in the DNS. The data it receives in response from other name servers is cached. Authoritative servers are not authoritative for cached data.

What are the 3 types of DNS?

There are three main kinds of DNS servers: primary servers, secondary servers, and caching servers.
Primary server. The primary server is the authoritative server for the zone. …
Secondary servers. Secondary servers are backup DNS servers. …
Caching servers.
