• Chris Sims

    Personally I haven't used it, but at my company it's done with Windows Deployment Services on a Windows Server, in addition to MDT images. Unfortunately, there is no easy answer to your last question; this is a very complex topic. :)

    Here's a guide to getting started with it: https://technet.microsoft.com/en-us/itpro/windows/deploy/get-started-with-the-microsoft-deployment-toolkit

    posted in Microsoft
  • Chris Sims

    Old post, but I saw it, so I figured I'd answer that last question:

    The main reason is simply this: gMSAs use the same schema classes as user and computer objects, both of which derive from Top. The old-hat way of handling a service, short of running it under a super-privileged account such as LocalSystem, was to assign it a regular user account. That meant every time the password expired you had to update it on every account you'd set up for those services, which could create downtime if you allowed them to lapse.

    Since the world has been moving towards 'always-on', i.e. "HA environments", having a service go down can be catastrophic. A lot of the best providers promise 'five nines', or 99.999% uptime; in terms humans recognize, that equates to a little over five minutes of downtime per YEAR. So in turn this leads administrators to simply set the 'password never expires' flag on a service account. While that does work, it's bad news, mainly because if anyone ever steals or cracks that password, they may have uninhibited control of your service. You may or may not have heard about Target getting hacked a few years ago; that was reportedly done through a point-of-sale service account that had a weak password but was set to never expire, so it was never changed.

    So, long story short: to avoid those terrible security practices, Microsoft's answer is the gMSA, an account that behaves more like a computer account and negotiates its own random password with the domain every X days (30 by default).
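
    If it helps to see it in practice, here's a minimal sketch, run on a domain controller or a management host with the AD tools installed; the account, host, and group names ('svcWeb', 'contoso.com', 'WebServers') are made-up examples:

    # One-time, per forest: create the KDS root key that gMSA passwords derive from.
    # Backdating it 10 hours is a common lab trick so it's usable immediately;
    # in production you would normally just wait.
    Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

    # Create the gMSA; only members of the example 'WebServers' group may
    # retrieve its managed password, which rotates every 30 days by default.
    New-ADServiceAccount -Name 'svcWeb' `
        -DNSHostName 'svcWeb.contoso.com' `
        -PrincipalsAllowedToRetrieveManagedPassword 'WebServers' `
        -ManagedPasswordIntervalInDays 30

    # On the host that runs the service: install and verify the account.
    Install-ADServiceAccount -Identity 'svcWeb'
    Test-ADServiceAccount -Identity 'svcWeb'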

    Does that clarify the reason for it?

    posted in Microsoft
  • Chris Sims

    Ronnie is spot on; it depends greatly on the need. Typically we define the DNS servers with Group Policies based on site (or even OU, so it depends on the AD structure too), and with DHCP servers handing out the appropriate options (set either per DHCP server or via a policy that applies to them). At least that's true for internal clients, at any rate.

    For the assignment of DNS servers you generally develop an internal strategy and build it out. For example, faced with this problem I might set up site-level Group Policy (incidentally, often shunned by tribal IT knowledge, since issues can become very difficult to track down if you do a poor job with policy). Say I have a site in the USA and a site in the UK, with appropriately named sites in AD Sites & Services. I can go there and create a policy that applies DNS servers to all clients that are members of that site. It goes hand in hand, however, with making sure your network team -- assuming that isn't also you -- gives you the nitty-gritty on all new subnets, because you have to add them to Sites & Services so that clients know where they are when they make requests.
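
    To make that concrete, here's a rough sketch of both halves in PowerShell; the subnet, site name, and DNS server addresses are made-up examples:

    # Tie a new subnet to its AD site so clients can locate nearby services
    # (requires the ActiveDirectory module)
    New-ADReplicationSubnet -Name '10.20.0.0/16' -Site 'UK'

    # Hand the right DNS servers to clients in a DHCP scope
    # (requires the DhcpServer module)
    Set-DhcpServerv4OptionValue -ScopeId 10.20.0.0 -DnsServer 10.20.0.10, 10.20.0.11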

    Large companies -- meaning global entities like your examples -- also often use GSLB (Global Server Load Balancing) to route requests based on their point of origin. For example, if you access Google from the USA you typically get google.com as a response, whereas if you come from an IP range considered to be in Brazil, the load-balancing devices will most likely redirect you to www.google.com.br.

    Does that help?

    posted in Microsoft
  • Chris Sims

    Wrap the result variable in double quotes ("$outputVariable") so it resolves to its string value. In my experience it can be a little tricky using those built-in variables like the ones in Mike's example code, so you may want to access the object property by name instead. It might also help to see the type of object you've exported; you can do this by piping the object to Get-Member.

    The "**" asterisks on the outside of the variable resolved and wrapped in quotes should generate this exact name or anything like it. The purpose of contains and not contains however is to get exact matches and what's not in a list. It sounds like you might be better off with another switch or a regular expression to detect the desired result if you want it to list anything that matches a pattern.

    posted in Microsoft
  • Chris Sims

    Re: Windows 10 and Customization with Sysprep

    You may also want to consider looking up DISM (Deployment Image Servicing and Management), which is a big part of the newer OS image deployment toolkits.
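
    For a taste of what that looks like, here's a hedged sketch using the Dism PowerShell module; the paths are made-up examples:

    # Mount an image, inject an update package, then save the changes
    Mount-WindowsImage -ImagePath 'C:\images\install.wim' -Index 1 -Path 'C:\mount'
    Add-WindowsPackage -Path 'C:\mount' -PackagePath 'C:\updates\update.cab'
    Dismount-WindowsImage -Path 'C:\mount' -Save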

    posted in Microsoft
  • Chris Sims

    I haven't gone back through the courses in the last couple of months; however, there are some common pitfalls here for someone who's learning. Using the AD Recycle Bin requires a Domain Admin account. I'm not familiar yet with the virtual environments that were recently set up for us, but you need to capture that credential explicitly through something like:

    $cred = Get-Credential -Credential CONTOSO\Admin
    then type the password into the prompt that pops up and click OK to save it in the $cred variable.

    Then run the cmdlet:
    Get-ADObject -Filter <FilterForObjective> -IncludeDeletedObjects -Credential $cred

    When that returns the list you want, you would pipe it to "| Restore-ADObject"

    and those accounts would be restored. The important point of mentioning all this is that if you do not explicitly use your credential in the search, the results coming from the Recycle Bin are hidden. That can make it appear as if there are no results at all, leading to much frustration.
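
    Put together, an end-to-end sketch looks something like this (the admin account and the name filter are made-up examples):

    # Capture the credential, search deleted objects with it, restore the matches
    $cred = Get-Credential -Credential CONTOSO\Admin
    Get-ADObject -Filter "Name -like 'JSmith*'" -IncludeDeletedObjects -Credential $cred |
        Restore-ADObject -Credential $cred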

    posted in Microsoft
  • Chris Sims

    If I'm reading you correctly about the contents of $a and $b, then what Mike offered with Compare-Object is your best bet. With its modifier switches it can show you what is unique to each list and what is shared between them, once they've been read into objects with Get-Content.

    However, if you're just checking whether a value from A is in list B, then his suggestion of -notcontains is probably better: you can iterate through $a with a foreach and simply get a true or false on whether each item exists as an individual element of $b.
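
    Here's a quick sketch of both (the file names are made-up examples):

    $a = Get-Content '.\listA.txt'
    $b = Get-Content '.\listB.txt'

    # Option 1: show what's unique to each side and what's shared
    Compare-Object -ReferenceObject $a -DifferenceObject $b -IncludeEqual

    # Option 2: test each item of $a against $b individually
    foreach ($item in $a) {
        if ($b -notcontains $item) {
            "$item is not in list B"
        }
    }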

    Doing one of those two things will let you avoid your 3rd step and solve the issue.

    posted in Microsoft
  • Chris Sims

    If you're asking what I think you're asking: yes, sort of.

    What it really does is separate the two and treat the server nodes as extensions of a namespace, so the nodes become identical copies of one another. If one node fails, you simply replace it and join the new one, and everything in the namespace is replicated to an identical root folder/share with that name.

    That's probably a somewhat simplistic view, but you wouldn't, for example, lose your permissions because the primary node failed, as long as you keep up with your replication and make sure it hasn't failed to replicate something to one of the 'partners'.
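
    If you end up scripting it, adding a second target to a namespace folder looks roughly like this; the namespace and server names are made-up examples (requires the DFSN module):

    # Point an existing namespace folder at an additional server
    New-DfsnFolderTarget -Path '\\contoso.com\files\projects' -TargetPath '\\FS02\projects'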

    posted in Microsoft
  • Chris Sims

    You can do this, but not directly: you have to use something called a Microsoft Transform file (.mst).

    http://windowsitpro.com/windows/q-can-i-specify-switches-msi-files-deployed-using-group-policy
    https://msdn.microsoft.com/en-us/library/aa367447(v=vs.85).aspx

    The transform file manipulates the MSI's internal database on the fly to accomplish the goal. That being said, I've never used one myself; we stick with SCCM, but that may be beyond the means of your school system (or at least the finance folks ;) ).
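
    For reference, applying a transform by hand from a command prompt or PowerShell looks something like this (the paths are made-up examples); in Group Policy you'd instead add the .mst on the package's Modifications tab:

    # Install an MSI silently with a transform applied
    msiexec /i \\server\share\App.msi TRANSFORMS=\\server\share\App.mst /qn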

    posted in General Discussion