Possible Xml Schema Types Computer Science Essay

I'd like to take some time today to explain some of the seemingly arbitrary limits placed on the XML data type, specifically those related to ID/IDREF validation, complex XML Schema types, the depth limit for XML data, and the enigmatic "XSD schema too complex" error.

ID/IDREF Validation

If your typed XML document has attributes of type xs:ID and/or xs:IDREF, SQL Server will enforce referential integrity on these attributes: Within a given document, no two attributes of type xs:ID may have the same value, and all attributes of type xs:IDREF must have the same value as some attribute of type xs:ID.
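As a minimal sketch of these rules (the collection and element names below are hypothetical, not from the original text), the following T-SQL creates a schema collection that uses xs:ID and xs:IDREF and validates a conforming document against it:

```sql
-- Hypothetical names throughout; a minimal sketch of ID/IDREF enforcement.
CREATE XML SCHEMA COLLECTION IdRefDemo AS N'
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="doc">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="id"  type="xs:ID"/>
            <xs:attribute name="ref" type="xs:IDREF"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>';
GO

-- Valid: the IDREF value "a" matches an ID in the same document.
-- A duplicate id, or a ref with no matching id, would fail validation.
DECLARE @x XML(IdRefDemo) = N'<doc><item id="a"/><item id="b" ref="a"/></doc>';
```

Note that the item with the IDREF appears after the item carrying the matching ID, so the one-pass validator never has to remember an unresolved forward reference.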

This requires the validator to remember which ID values it's seen. Since the validator does only one pass, it must also remember any IDREF values for which it has not yet encountered a corresponding ID. It is thus possible to construct an XML document which requires an arbitrary amount of memory to validate correctly. In order to prevent denial of service attacks, we capped the amount of memory available for this purpose at one megabyte. If you try to validate a document which exceeds this limit, validation will fail with error 6969:

ID/IDREF validation consumed too much memory. Try reducing the number of ID and IDREF attributes. Rearranging the file so that elements with IDREF attributes appear after the elements which they reference may also be helpful.

There's no simple way to describe the precise conditions necessary to produce this error, but the relevant factors are the number of ID and forward-referencing IDREF attributes and the lengths of their values. The cap is the same for the 32-bit and 64-bit versions of SQL Server, so there are some documents which will validate on the 32-bit version but fail to validate on the 64-bit version due to the larger pointer size.

Complex XML Schema Types

When submitting a schema to be added to an XML Schema Collection, you may see message 6998:

Type or content model '[TypeName]' is too complicated. It may be necessary to reduce the number of enumerations or the size of the content model.

When a type is needed for validation, the validator loads its definition from metadata and compiles it into a format suitable for quick validation. In order to prevent any one type from using too much memory, SQL Server caps the size of a compiled type at one megabyte. SQL Server compiles all types and performs this check when the schema is imported in order to avoid accepting types which exceed the limit.

As with the ID/IDREF limit, there's no simple way to describe precisely the conditions necessary to exceed this limit. The most likely causes are a large number of attributes, a content model with many particles (xs:sequence, xs:choice, xs:all, xs:element, or xs:any), or many enumeration facets. Note that properties inherited from the base type or imported via xs:group or xs:attributeGroup references are expanded in the compiled type definition, so it's possible for a type to exceed the limit just by adding a few attributes to its base type, if the base type is near the limit.

The types of child elements, however, do not contribute to the limit. For example, you should have no problem defining a type whose content model contains several child elements, each of which has a different type whose compiled representation is 500K. If you find yourself running up against this limit, it may be helpful to split the type's properties between two or more sub-types.
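To illustrate the splitting technique (all names here are hypothetical), the sketch below factors what could have been one large content model into child elements with their own named types. Each named type is compiled separately, so only the parent's own particles count against its one-megabyte cap:

```sql
-- Hypothetical names; a sketch of splitting one large type into sub-types.
CREATE XML SCHEMA COLLECTION SplitDemo AS N'
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Each part type is compiled on its own and does not count
       against the parent type''s limit. -->
  <xs:complexType name="AddressPart">
    <xs:sequence>
      <xs:element name="street" type="xs:string"/>
      <xs:element name="city"   type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="ContactPart">
    <xs:sequence>
      <xs:element name="phone" type="xs:string"/>
      <xs:element name="email" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="customer">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="address" type="AddressPart"/>
        <xs:element name="contact" type="ContactPart"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>';
```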

"Schema Too Complex"

When adding a schema to an XML Schema Collection, you may occasionally run into error 2362:

XSD schema too complex.

This is somewhat misleading; what it actually means is that SQL Server is running low on stack space. We rely heavily on recursion for parsing and semantic validation of XML Schema documents, and in rare (and usually intentionally pathological) cases, this presents a danger of stack overflow, which would kill the process and crash the server. To prevent this, we check the remaining stack space at recursion points and abort the transaction if it's low enough to cause concern.

If you encounter this error and your schema is not intentionally pathological, you may be able to make some semantically insignificant changes that will allow SQL Server to process your schema. The most common causes of recursion are nesting and forward references. If you have several anonymous types nested in the Russian-doll style, it may help to unnest them and move the local element or type definitions up to the global level. Additionally, it may help to rearrange schema components to eliminate forward references--that is, try to make sure that component definitions precede their references in document order.
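As a sketch of the unnesting advice (hypothetical names again), the schema below defines what might otherwise be a nested anonymous type as a global named type, placed before the element that references it so there are no forward references:

```sql
-- Hypothetical names; a sketch of flattening a Russian-doll schema.
CREATE XML SCHEMA COLLECTION FlatDemo AS N'
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- Defined globally, and before its reference in document order,
       rather than nested anonymously inside "order". -->
  <xs:complexType name="LineItem">
    <xs:sequence>
      <xs:element name="sku" type="xs:string"/>
      <xs:element name="qty" type="xs:int"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="LineItem" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>';
```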

XML Depth Limit

Finally, SQL Server limits the depth of any XML instance, typed or untyped, to 128 levels. Conversion of a string with deeper nesting to XML will fail with error 6335:

XML datatype instance has too many levels of nested nodes. Maximum allowed depth is 128 levels.

We impose this limit in order to guarantee that we will be able to create an XML index for any XML column. In SQL Server, the primary key of an XML index consists of the primary key of the base table and the ordpath. The maximum length for an index key in SQL Server is 900 bytes, so the combined length of the base table's primary key and the ordpath must be 900 bytes or less. We decided to impose a limit of 128 bytes on the primary key of the base table, leaving 772 bytes for the ordpath. Based on the properties of ordpath, we decided that 128 levels would be a good upper limit to ensure that the ordpath never exceeds the maximum size.
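The limit is easy to demonstrate. This sketch (untested here against a live server) builds a document nested 129 levels deep; the conversion to XML should fail with error 6335:

```sql
-- Sketch: 129 levels of nesting exceeds the 128-level maximum.
DECLARE @s NVARCHAR(MAX) = REPLICATE(N'<e>', 129) + REPLICATE(N'</e>', 129);
DECLARE @x XML = @s;  -- conversion fails here with error 6335
```

Reducing the nesting to 128 levels or fewer allows the conversion to succeed.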

A Real-Time Scenario

A system needs hard real-time functionality to retrieve information from an external source. The information is stored in the system and will be presented to the user in some graphical way. Figure 1 shows a possible scenario for this problem.


It is sometimes easier to deal with primitives as objects. Moreover, most of the collection classes store objects rather than primitive data types, and the wrapper classes provide many utility methods. For these reasons we need wrapper classes. Since we create instances of these classes, we can store them in any of the collection classes and pass them around as a collection. We can also pass them as method parameters where a method expects an object.
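A brief sketch of these points (class and method names are my own, purely illustrative): a collection can only hold boxed Integer objects, autoboxing converts primitives on the way in and out, and the wrapper class supplies utility methods such as Integer.parseInt:

```java
import java.util.ArrayList;
import java.util.List;

public class WrapperDemo {
    // Sum a list of boxed Integers, unboxing each element.
    static int sum(List<Integer> values) {
        int total = 0;
        for (int v : values) {   // auto-unboxing: Integer -> int
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<>();
        numbers.add(40);                      // autoboxing: int -> Integer
        numbers.add(Integer.parseInt("2"));   // utility method on the wrapper
        System.out.println(sum(numbers));     // prints 42
    }
}
```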

Comparison with real-time strategy

In general terms, military strategy refers to the use of a broad arsenal of weapons including diplomatic, informational, military, and economic resources, whereas military tactics is more concerned with short-term goals such as winning an individual battle.[12] In the context of strategy video games, however, the difference often comes down to the more limited criteria of either a presence or absence of base building and unit production.

Real-time strategy games have been criticized for an overabundance of tactical considerations when compared to the amount of strategic gameplay found in such games. According to Chris Taylor, lead designer of Supreme Commander, "[My first attempt at visualizing RTSs in a fresh and interesting new way] was my realizing that although we call this genre 'Real-Time Strategy,' it should have been called 'Real-Time Tactics' with a dash of strategy thrown in."[13] Taylor then went on to say that his own game featured added elements of a broader strategic level.[13]

In an article for Gamespy, Mark Walker said that developers need to begin looking outside the genre for new ideas in order for strategy games to continue to be successful in the future.[12]

In an article for Gamasutra, Nathan Toronto criticizes real-time strategy games for too often having only one valid means of victory (attrition), comparing them unfavorably to real-time tactics games. According to Toronto, players' awareness that their only way to win is militarily makes them unlikely to respond to gestures of diplomacy; the result being that the winner of a real-time strategy game is too often the best tactician rather than the best strategist.[14] Troy Goodfellow counters this by saying that the problem is not that real-time strategy games are lacking in strategic elements (he calls attrition a form of strategy); rather, it is that they too often rely upon the same strategy: produce faster than you consume. He also says that building and managing armies is the conventional definition of real-time strategy, and that it is unfair to make comparisons with other genres when they break convention.[15]

This example describes a scenario where the administrator is required to create a new private queue that receives multiple messages that are eventually processed by an internal application. The administrator is required to constantly monitor the number of messages in the private queue to ensure that the queue size does not grow beyond its allocated quota, causing MSMQ to drop messages destined for the private queue.

This example adheres to the following constraints:

The administrator is able to create new private queues remotely.

Administrative operations are authenticated.

The monitoring of the queue state is done in real time.

To run the acceptance tests

Open the solution file RealTimeSearchQuickstart (VS2005)_FunctionalTests.sln.

Fix the references to Interop.SHDocVw.dll, Rhino.Mocks.dll, and WatiN.Core.dll assemblies in the RealTimeSearchQuickstart (VS2005)_FunctionalTests project.

Run the tests using the Test Manager.

To see the real-time search behavior

Run the QuickStart.

On the Search Customer page, enter values in the input fields Name, City, State and/or Postal Code; the search results will appear as you type.

For example, type B in the Name field; you will get twenty results. After the "B," type an o; you will get four results. Append the letter n, and the result set will be reduced to two results. Finally, append the letter d; you will see one result. Figure 2 illustrates the search results after you type Bo.

A real-time thread living inside a native Win32 DLL receives an interrupt from an external source. The thread processes the interrupt and stores relevant information to be presented to the user. On the right side, a separate UI thread, written in managed code, reads the information previously stored by the real-time thread. Because context switches between processes are expensive, you want the entire system to live within a single process. By putting the real-time functionality in a DLL and providing an interface between that DLL and the other parts of the system, you achieve the goal of having one process deal with all parts of the system. Communication between the UI thread and the real-time (RT) thread is possible by using P/Invoke to call into the native Win32 code.
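As a sketch of the native side of that interface (function and buffer names are hypothetical, and synchronization between the two threads is omitted for brevity), the DLL might export one function for the real-time thread to store data and another for the managed UI thread to read it back via P/Invoke:

```c
/* Hypothetical sketch of the native DLL's exported interface. */
#include <string.h>

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT   /* allow the same source to build on other platforms */
#endif

#define MAX_SAMPLES 256

static double g_samples[MAX_SAMPLES];
static int    g_count = 0;

/* Called from the real-time thread when an interrupt is processed. */
EXPORT void store_sample(double value) {
    if (g_count < MAX_SAMPLES)
        g_samples[g_count++] = value;
}

/* Called from the managed UI thread via P/Invoke.
 * Copies up to max_out samples into out; returns the number copied. */
EXPORT int read_samples(double *out, int max_out) {
    int n = (g_count < max_out) ? g_count : max_out;
    memcpy(out, g_samples, (size_t)n * sizeof(double));
    return n;
}
```

On the managed side, these exports would be declared with DllImport attributes so the UI thread can call them directly without leaving the process.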